May 17 00:23:20.902919 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri May 16 22:44:56 -00 2025 May 17 00:23:20.902943 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e May 17 00:23:20.902955 kernel: BIOS-provided physical RAM map: May 17 00:23:20.902962 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable May 17 00:23:20.902968 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable May 17 00:23:20.902974 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20 May 17 00:23:20.902982 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved May 17 00:23:20.902989 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data May 17 00:23:20.902996 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS May 17 00:23:20.903005 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable May 17 00:23:20.903012 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved May 17 00:23:20.903018 kernel: NX (Execute Disable) protection: active May 17 00:23:20.903025 kernel: APIC: Static calls initialized May 17 00:23:20.903032 kernel: efi: EFI v2.7 by EDK II May 17 00:23:20.903041 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518 May 17 00:23:20.903051 kernel: SMBIOS 2.7 present. 
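The BIOS-e820 lines above are the firmware's map of physical memory; everything the OS may freely use is tagged "usable". As a quick sanity check, the usable ranges can be summed with a short script. A minimal sketch (Python, assuming the dmesg text is available as a string; it classifies a range only by the first word of its type, so "reserved", "ACPI data" and "type 20" are all skipped):

    import re

    # Matches e.g.: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
    E820_RE = re.compile(r"BIOS-e820: \[mem (0x[0-9a-f]+)-(0x[0-9a-f]+)\] (\w+)")

    def usable_bytes(dmesg_text: str) -> int:
        """Sum the sizes of all e820 ranges whose type starts with 'usable'."""
        total = 0
        for start, end, kind in E820_RE.findall(dmesg_text):
            if kind == "usable":
                total += int(end, 16) - int(start, 16) + 1  # ranges are inclusive
        return total

Applied to the map above this yields about 1.94 GiB, which lines up with the "Memory: 1874608K/2037804K available" line the kernel prints later: a 2 GiB t3.small minus firmware reservations.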
May 17 00:23:20.903059 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 May 17 00:23:20.903066 kernel: Hypervisor detected: KVM May 17 00:23:20.903074 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 17 00:23:20.903082 kernel: kvm-clock: using sched offset of 4093030194 cycles May 17 00:23:20.903090 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 17 00:23:20.903098 kernel: tsc: Detected 2499.996 MHz processor May 17 00:23:20.903106 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 17 00:23:20.903114 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 17 00:23:20.903122 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 May 17 00:23:20.903132 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs May 17 00:23:20.903140 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 17 00:23:20.903147 kernel: Using GB pages for direct mapping May 17 00:23:20.903155 kernel: Secure boot disabled May 17 00:23:20.903162 kernel: ACPI: Early table checksum verification disabled May 17 00:23:20.903170 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) May 17 00:23:20.903178 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) May 17 00:23:20.903186 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) May 17 00:23:20.903194 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) May 17 00:23:20.903204 kernel: ACPI: FACS 0x00000000789D0000 000040 May 17 00:23:20.903211 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) May 17 00:23:20.903219 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) May 17 00:23:20.903227 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) May 17 00:23:20.903234 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) May 17 00:23:20.903242 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) May 17 00:23:20.903254 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) May 17 00:23:20.903265 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) May 17 00:23:20.903273 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) May 17 00:23:20.903282 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] May 17 00:23:20.903290 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] May 17 00:23:20.903298 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] May 17 00:23:20.903306 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] May 17 00:23:20.903314 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] May 17 00:23:20.903325 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] May 17 00:23:20.903333 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] May 17 00:23:20.903341 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] May 17 00:23:20.903349 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] May 17 00:23:20.903358 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] May 17 00:23:20.903366 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] May 17 00:23:20.903374 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 May 17 00:23:20.903382 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 May 17 00:23:20.903390 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] May 17 00:23:20.903401 kernel: NUMA: Initialized distance table, cnt=1 May 17 00:23:20.903408 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff] May 17 00:23:20.903417 kernel: Zone ranges: May 17 00:23:20.903626 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 17 00:23:20.903650 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] May 17 00:23:20.903659 kernel: Normal empty May 17 00:23:20.903668 kernel: Movable zone start for each node May 17 00:23:20.903677 kernel: Early memory node ranges May 17 00:23:20.903685 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] May 17 00:23:20.903699 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] May 17 00:23:20.903707 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] May 17 00:23:20.903716 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] May 17 00:23:20.903724 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 17 00:23:20.903732 kernel: On node 0, zone DMA: 96 pages in unavailable ranges May 17 00:23:20.903741 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges May 17 00:23:20.903750 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges May 17 00:23:20.903758 kernel: ACPI: PM-Timer IO Port: 0xb008 May 17 00:23:20.903767 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 17 00:23:20.903777 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 May 17 00:23:20.903786 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 17 00:23:20.903794 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 17 00:23:20.903802 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 17 00:23:20.903811 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 17 00:23:20.903819 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 17 00:23:20.903827 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 17 00:23:20.903836 kernel: TSC deadline timer available May 17 00:23:20.903844 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 17 00:23:20.903852 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 17 00:23:20.903863 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices May 17 00:23:20.903871 kernel: Booting paravirtualized kernel on KVM May 17 00:23:20.903880 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 17 00:23:20.903888 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 17 00:23:20.903897 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 May 17 00:23:20.903905 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 May 17 00:23:20.903913 kernel: pcpu-alloc: [0] 0 1 May 17 00:23:20.903921 kernel: kvm-guest: PV spinlocks enabled May 17 00:23:20.903930 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 17 00:23:20.903942 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e May 17 00:23:20.903951 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 17 00:23:20.903960 kernel: random: crng init done May 17 00:23:20.903968 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 00:23:20.903976 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 17 00:23:20.903984 kernel: Fallback order for Node 0: 0 May 17 00:23:20.903993 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318 May 17 00:23:20.904001 kernel: Policy zone: DMA32 May 17 00:23:20.904012 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 17 00:23:20.904021 kernel: Memory: 1874608K/2037804K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42872K init, 2320K bss, 162936K reserved, 0K cma-reserved) May 17 00:23:20.904029 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 17 00:23:20.904038 kernel: Kernel/User page tables isolation: enabled May 17 00:23:20.904046 kernel: ftrace: allocating 37948 entries in 149 pages May 17 00:23:20.904054 kernel: ftrace: allocated 149 pages with 4 groups May 17 00:23:20.904063 kernel: Dynamic Preempt: voluntary May 17 00:23:20.904071 kernel: rcu: Preemptible hierarchical RCU implementation. May 17 00:23:20.904080 kernel: rcu: RCU event tracing is enabled. May 17 00:23:20.904091 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 17 00:23:20.904099 kernel: Trampoline variant of Tasks RCU enabled. May 17 00:23:20.904108 kernel: Rude variant of Tasks RCU enabled. May 17 00:23:20.904116 kernel: Tracing variant of Tasks RCU enabled. May 17 00:23:20.904124 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 17 00:23:20.904132 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 17 00:23:20.904141 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 17 00:23:20.904159 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 17 00:23:20.904168 kernel: Console: colour dummy device 80x25 May 17 00:23:20.904177 kernel: printk: console [tty0] enabled May 17 00:23:20.904185 kernel: printk: console [ttyS0] enabled May 17 00:23:20.904194 kernel: ACPI: Core revision 20230628 May 17 00:23:20.904206 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns May 17 00:23:20.904215 kernel: APIC: Switch to symmetric I/O mode setup May 17 00:23:20.904224 kernel: x2apic enabled May 17 00:23:20.904233 kernel: APIC: Switched APIC routing to: physical x2apic May 17 00:23:20.904242 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns May 17 00:23:20.904253 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) May 17 00:23:20.904262 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 May 17 00:23:20.904271 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 May 17 00:23:20.904280 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 17 00:23:20.904289 kernel: Spectre V2 : Mitigation: Retpolines May 17 00:23:20.904298 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 17 00:23:20.904306 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! May 17 00:23:20.904315 kernel: RETBleed: Vulnerable May 17 00:23:20.904324 kernel: Speculative Store Bypass: Vulnerable May 17 00:23:20.904335 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode May 17 00:23:20.904344 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode May 17 00:23:20.904353 kernel: GDS: Unknown: Dependent on hypervisor status May 17 00:23:20.904361 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 17 00:23:20.904370 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 17 00:23:20.904379 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 17 00:23:20.904387 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' May 17 00:23:20.904396 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' May 17 00:23:20.904405 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' May 17 00:23:20.904414 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' May 17 00:23:20.904423 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' May 17 00:23:20.904472 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' May 17 00:23:20.904481 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 17 00:23:20.904490 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 May 17 00:23:20.904499 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 May 17 00:23:20.904508 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 May 17 00:23:20.904517 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 May 17 00:23:20.904526 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 May 17 00:23:20.904535 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 May 17 00:23:20.904544 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. May 17 00:23:20.904553 kernel: Freeing SMP alternatives memory: 32K May 17 00:23:20.904561 kernel: pid_max: default: 32768 minimum: 301 May 17 00:23:20.904570 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 17 00:23:20.904582 kernel: landlock: Up and running. May 17 00:23:20.904591 kernel: SELinux: Initializing. May 17 00:23:20.904599 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 17 00:23:20.904608 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 17 00:23:20.904617 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) May 17 00:23:20.904626 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 17 00:23:20.904636 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 17 00:23:20.904645 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 17 00:23:20.904654 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. May 17 00:23:20.904663 kernel: signal: max sigframe size: 3632 May 17 00:23:20.904675 kernel: rcu: Hierarchical SRCU implementation. May 17 00:23:20.904684 kernel: rcu: Max phase no-delay instances is 400. May 17 00:23:20.904693 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 17 00:23:20.904702 kernel: smp: Bringing up secondary CPUs ... May 17 00:23:20.904712 kernel: smpboot: x86: Booting SMP configuration: May 17 00:23:20.904721 kernel: .... node #0, CPUs: #1 May 17 00:23:20.904730 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. May 17 00:23:20.904739 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. May 17 00:23:20.904751 kernel: smp: Brought up 1 node, 2 CPUs May 17 00:23:20.904760 kernel: smpboot: Max logical packages: 1 May 17 00:23:20.904769 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) May 17 00:23:20.904778 kernel: devtmpfs: initialized May 17 00:23:20.904787 kernel: x86/mm: Memory block size: 128MB May 17 00:23:20.904797 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) May 17 00:23:20.904806 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 17 00:23:20.904815 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 17 00:23:20.904824 kernel: pinctrl core: initialized pinctrl subsystem May 17 00:23:20.904835 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 17 00:23:20.904844 kernel: audit: initializing netlink subsys (disabled) May 17 00:23:20.904854 kernel: audit: type=2000 audit(1747441401.481:1): state=initialized audit_enabled=0 res=1 May 17 00:23:20.904863 kernel: thermal_sys: Registered thermal governor 'step_wise' May 17 00:23:20.904872 kernel: thermal_sys: Registered thermal governor 'user_space' May 17 00:23:20.904881 kernel: cpuidle: using governor menu May 17 00:23:20.904890 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 17 00:23:20.904899 kernel: dca service started, version 1.12.1 May 17 00:23:20.904908 kernel: PCI: Using configuration type 1 for base access May 17 00:23:20.904920 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 17 00:23:20.904929 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 17 00:23:20.904938 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 17 00:23:20.904948 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 17 00:23:20.904958 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 17 00:23:20.904967 kernel: ACPI: Added _OSI(Module Device) May 17 00:23:20.904975 kernel: ACPI: Added _OSI(Processor Device) May 17 00:23:20.904984 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 17 00:23:20.904994 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 17 00:23:20.905005 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded May 17 00:23:20.905014 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 17 00:23:20.905023 kernel: ACPI: Interpreter enabled May 17 00:23:20.905032 kernel: ACPI: PM: (supports S0 S5) May 17 00:23:20.905041 kernel: ACPI: Using IOAPIC for interrupt routing May 17 00:23:20.905051 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 17 00:23:20.905060 kernel: PCI: Using E820 reservations for host bridge windows May 17 00:23:20.905070 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F May 17 00:23:20.905079 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 17 00:23:20.905248 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] May 17 00:23:20.905351 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] May 17 00:23:20.905495 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge May 17 00:23:20.905512 kernel: acpiphp: Slot [3] registered May 17 00:23:20.905521 kernel: acpiphp: Slot [4] registered May 17 00:23:20.905530 kernel: acpiphp: Slot [5] registered May 17 00:23:20.905540 kernel: acpiphp: Slot [6] registered May 17 00:23:20.905549 kernel: acpiphp: Slot [7] registered May 17 00:23:20.905562 kernel: acpiphp: Slot [8] registered May 17 00:23:20.905571 kernel: acpiphp: Slot [9] registered May 17 00:23:20.905580 kernel: acpiphp: Slot [10] registered May 17 00:23:20.905589 kernel: acpiphp: Slot [11] registered May 17 00:23:20.905598 kernel: acpiphp: Slot [12] registered May 17 00:23:20.905607 kernel: acpiphp: Slot [13] registered May 17 00:23:20.905616 kernel: acpiphp: Slot [14] registered May 17 00:23:20.905625 kernel: acpiphp: Slot [15] registered May 17 00:23:20.905634 kernel: acpiphp: Slot [16] registered May 17 00:23:20.905646 kernel: acpiphp: Slot [17] registered May 17 00:23:20.905655 kernel: acpiphp: Slot [18] registered May 17 00:23:20.905664 kernel: acpiphp: Slot [19] registered May 17 00:23:20.905673 kernel: acpiphp: Slot [20] registered May 17 00:23:20.905682 kernel: acpiphp: Slot [21] registered May 17 00:23:20.905691 kernel: acpiphp: Slot [22] registered May 17 00:23:20.905700 kernel: acpiphp: Slot [23] registered May 17 00:23:20.905708 kernel: acpiphp: Slot [24] registered May 17 00:23:20.905717 kernel: acpiphp: Slot [25] registered May 17 00:23:20.905728 kernel: acpiphp: Slot [26] registered May 17 00:23:20.905737 kernel: acpiphp: Slot [27] registered May 17 00:23:20.905746 kernel: acpiphp: Slot [28] registered May 17 00:23:20.905755 kernel: acpiphp: Slot [29] registered May 17 00:23:20.905763 kernel: acpiphp: Slot [30] registered May 17 00:23:20.905772 kernel: acpiphp: Slot [31] registered May 17 00:23:20.905781 kernel: PCI host bridge to bus 0000:00 
May 17 00:23:20.905878 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 17 00:23:20.905961 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 17 00:23:20.906046 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 17 00:23:20.906128 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] May 17 00:23:20.906210 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] May 17 00:23:20.906291 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 17 00:23:20.906400 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 May 17 00:23:20.906514 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 May 17 00:23:20.906617 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 May 17 00:23:20.906707 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI May 17 00:23:20.906800 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff May 17 00:23:20.906891 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff May 17 00:23:20.906981 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff May 17 00:23:20.907072 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff May 17 00:23:20.907165 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff May 17 00:23:20.907262 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff May 17 00:23:20.907394 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 May 17 00:23:20.907525 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref] May 17 00:23:20.907617 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] May 17 00:23:20.907707 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb May 17 00:23:20.907798 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 17 00:23:20.908556 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 May 17 00:23:20.908675 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff] May 17 00:23:20.908774 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 May 17 00:23:20.908866 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff] May 17 00:23:20.908878 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 17 00:23:20.908888 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 17 00:23:20.908896 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 17 00:23:20.908906 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 17 00:23:20.908918 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 May 17 00:23:20.908927 kernel: iommu: Default domain type: Translated May 17 00:23:20.908936 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 17 00:23:20.908945 kernel: efivars: Registered efivars operations May 17 00:23:20.908954 kernel: PCI: Using ACPI for IRQ routing May 17 00:23:20.908963 kernel: PCI: pci_cache_line_size set to 64 bytes May 17 00:23:20.908972 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] May 17 00:23:20.908981 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] May 17 00:23:20.909071 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device May 17 00:23:20.909163 kernel: pci 0000:00:03.0: vgaarb: bridge control possible May 17 00:23:20.909252 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 17 00:23:20.909263 kernel: vgaarb: loaded May 17 00:23:20.909272 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 May 17 00:23:20.909281 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter May 17 00:23:20.909290 kernel: clocksource: Switched to clocksource kvm-clock May 17 00:23:20.909299 kernel: VFS: Disk quotas dquot_6.6.0 May 17 00:23:20.909308 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 17 00:23:20.909317 kernel: pnp: PnP ACPI init May 17 00:23:20.909329 kernel: pnp: PnP ACPI: found 5 devices May 17 00:23:20.909338 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 17 00:23:20.909347 kernel: NET: Registered PF_INET protocol family May 17 00:23:20.909356 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 17 00:23:20.909365 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) May 17 00:23:20.909374 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 17 00:23:20.909383 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 17 00:23:20.909392 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) May 17 00:23:20.909403 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 17 00:23:20.909412 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 17 00:23:20.909421 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) May 17 00:23:20.910512 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 17 00:23:20.910533 kernel: NET: Registered PF_XDP protocol family May 17 00:23:20.910651 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 17 00:23:20.910750 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 17 00:23:20.911621 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 17 00:23:20.911720 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] May 17 00:23:20.911810 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] May 17 00:23:20.911913 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 17 00:23:20.911925 kernel: PCI: CLS 0 bytes, default 64 May 17 00:23:20.911935 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 17 00:23:20.911945 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns May 17 00:23:20.911954 kernel: clocksource: Switched to clocksource tsc May 17 00:23:20.911963 kernel: Initialise system trusted keyrings May 17 00:23:20.911972 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 17 00:23:20.911984 kernel: Key type asymmetric registered May 17 00:23:20.911993 kernel: Asymmetric key parser 'x509' registered May 17 00:23:20.912002 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 17 00:23:20.912011 kernel: io scheduler mq-deadline registered May 17 00:23:20.912020 kernel: io scheduler kyber registered May 17 00:23:20.912029 kernel: io scheduler bfq registered May 17 00:23:20.912038 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 17 00:23:20.912047 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 00:23:20.912056 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 17 00:23:20.912066 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 17 00:23:20.912077 kernel: i8042: Warning: Keylock active May 17 00:23:20.912086 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 17 00:23:20.912095 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 17 00:23:20.912192 kernel: rtc_cmos 00:00: RTC can wake from S4 May 17 00:23:20.912279 kernel: rtc_cmos 00:00: registered as rtc0 May 17 00:23:20.912363 kernel: rtc_cmos 00:00: setting system clock to 2025-05-17T00:23:20 UTC (1747441400) May 17 00:23:20.913832 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram May 17 00:23:20.913858 kernel: intel_pstate: CPU model not supported May 17 00:23:20.913869 kernel: efifb: probing for efifb May 17 00:23:20.913878 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k May 17 00:23:20.913888 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 May 17 00:23:20.913896 kernel: efifb: scrolling: redraw May 17 00:23:20.913906 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 17 00:23:20.913915 kernel: Console: switching to colour frame buffer device 100x37 May 17 00:23:20.913924 kernel: fb0: EFI VGA frame buffer device May 17 00:23:20.913934 kernel: pstore: Using crash dump compression: deflate May 17 00:23:20.913945 kernel: pstore: Registered efi_pstore as persistent store backend May 17 00:23:20.913954 kernel: NET: Registered PF_INET6 protocol family May 17 00:23:20.913963 kernel: Segment Routing with IPv6 May 17 00:23:20.913972 kernel: In-situ OAM (IOAM) with IPv6 May 17 00:23:20.913981 kernel: NET: Registered PF_PACKET protocol family May 17 00:23:20.913990 kernel: Key type dns_resolver registered May 17 00:23:20.913999 kernel: IPI shorthand broadcast: enabled May 17 00:23:20.914025 kernel: sched_clock: Marking stable (511001811, 139950974)->(732805996, -81853211) May 17 00:23:20.914037 kernel: registered taskstats version 1 May 17 00:23:20.914047 kernel: Loading compiled-in X.509 certificates May 17 00:23:20.914059 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 85b8d1234ceca483cb3defc2030d93f7792663c9' May 17 00:23:20.914068 kernel: Key type .fscrypt registered May 17 00:23:20.914077 kernel: Key type fscrypt-provisioning registered May 17 00:23:20.914086 kernel: ima: No TPM chip found, activating TPM-bypass! May 17 00:23:20.914096 kernel: ima: Allocated hash algorithm: sha1 May 17 00:23:20.914105 kernel: ima: No architecture policies found May 17 00:23:20.914115 kernel: clk: Disabling unused clocks May 17 00:23:20.914124 kernel: Freeing unused kernel image (initmem) memory: 42872K May 17 00:23:20.914136 kernel: Write protecting the kernel read-only data: 36864k May 17 00:23:20.914146 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K May 17 00:23:20.914155 kernel: Run /init as init process May 17 00:23:20.914165 kernel: with arguments: May 17 00:23:20.914174 kernel: /init May 17 00:23:20.914183 kernel: with environment: May 17 00:23:20.914192 kernel: HOME=/ May 17 00:23:20.914201 kernel: TERM=linux May 17 00:23:20.914211 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 00:23:20.914225 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 00:23:20.914237 systemd[1]: Detected virtualization amazon. May 17 00:23:20.914247 systemd[1]: Detected architecture x86-64. May 17 00:23:20.914257 systemd[1]: Running in initrd.
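The "(order: N, ... bytes)" annotations on the hash-table lines above follow the kernel's page-allocation convention: an order-N allocation is 2^N contiguous base pages. A quick check of the "TCP established hash table entries: 16384 (order: 5, 131072 bytes)" line (Python sketch, assuming the 4 KiB x86-64 page size):

    PAGE = 4096  # x86-64 base page size

    def order_bytes(order: int) -> int:
        """An order-N allocation is 2**N contiguous pages."""
        return (1 << order) * PAGE

    assert order_bytes(5) == 131072   # matches the logged size
    assert 131072 // 16384 == 8       # 8 bytes per bucket: one 64-bit list head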
May 17 00:23:20.914267 systemd[1]: No hostname configured, using default hostname. May 17 00:23:20.914276 systemd[1]: Hostname set to . May 17 00:23:20.914286 systemd[1]: Initializing machine ID from VM UUID. May 17 00:23:20.914299 systemd[1]: Queued start job for default target initrd.target. May 17 00:23:20.914309 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:23:20.914320 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:23:20.914331 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 17 00:23:20.914340 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 00:23:20.914350 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 17 00:23:20.914361 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 17 00:23:20.914374 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 17 00:23:20.914385 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 17 00:23:20.914395 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:23:20.914405 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:23:20.914415 systemd[1]: Reached target paths.target - Path Units. May 17 00:23:20.914458 systemd[1]: Reached target slices.target - Slice Units. May 17 00:23:20.914471 systemd[1]: Reached target swap.target - Swaps. May 17 00:23:20.914485 systemd[1]: Reached target timers.target - Timer Units. May 17 00:23:20.914495 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 17 00:23:20.914505 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:23:20.914516 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 17 00:23:20.914526 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 17 00:23:20.914535 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 00:23:20.914549 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 00:23:20.914559 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:23:20.914569 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:23:20.914578 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 17 00:23:20.914588 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 00:23:20.914598 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 17 00:23:20.914608 systemd[1]: Starting systemd-fsck-usr.service... May 17 00:23:20.914618 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 00:23:20.914628 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 00:23:20.914641 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:23:20.914651 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 17 00:23:20.914661 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:23:20.914671 systemd[1]: Finished systemd-fsck-usr.service. 
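The odd-looking unit names above (dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device and friends) are systemd's escaping of device paths: "/" becomes "-", and characters not allowed in unit names are hex-escaped, which is why every literal "-" in a partition label shows up as "\x2d". A simplified sketch of the rule (Python; the authoritative definition lives in systemd.unit(5) and systemd-escape(1), and unlike this sketch systemd restricts itself to ASCII):

    def systemd_escape_path(path: str) -> str:
        """Approximate systemd's path-to-unit-name escaping."""
        out = []
        for i, ch in enumerate(path.strip("/")):
            if ch == "/":
                out.append("-")                  # path separators become dashes
            elif ch.isalnum() or ch in "_:" or (ch == "." and i > 0):
                out.append(ch)                   # allowed as-is
            else:
                out.append("\\x%02x" % ord(ch))  # everything else is hex-escaped
        return "".join(out)

    # systemd_escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device"
    #   -> 'dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device', as in the journal above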
May 17 00:23:20.914701 systemd-journald[178]: Collecting audit messages is disabled. May 17 00:23:20.914727 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 00:23:20.914737 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:23:20.914747 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:23:20.914761 systemd-journald[178]: Journal started May 17 00:23:20.914782 systemd-journald[178]: Runtime Journal (/run/log/journal/ec25a558bc2bc4ca2a5be9aad7ee2e2f) is 4.7M, max 38.2M, 33.4M free. May 17 00:23:20.909629 systemd-modules-load[179]: Inserted module 'overlay' May 17 00:23:20.921122 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:23:20.929750 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:23:20.933586 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 00:23:20.935796 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 00:23:20.947540 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 17 00:23:20.947569 kernel: Bridge firewalling registered May 17 00:23:20.942406 systemd-modules-load[179]: Inserted module 'br_netfilter' May 17 00:23:20.943471 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 00:23:20.952957 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:23:20.955765 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:23:20.960327 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:23:20.961029 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:23:20.971605 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 17 00:23:20.975367 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:23:20.980621 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 00:23:20.986621 dracut-cmdline[212]: dracut-dracut-053 May 17 00:23:20.989716 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e May 17 00:23:21.006467 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 17 00:23:21.009909 systemd-resolved[215]: Positive Trust Anchors: May 17 00:23:21.010524 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:23:21.010562 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:23:21.015809 systemd-resolved[215]: Defaulting to hostname 'linux'. May 17 00:23:21.017230 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:23:21.018102 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:23:21.065515 kernel: SCSI subsystem initialized May 17 00:23:21.075451 kernel: Loading iSCSI transport class v2.0-870. May 17 00:23:21.086449 kernel: iscsi: registered transport (tcp) May 17 00:23:21.107830 kernel: iscsi: registered transport (qla4xxx) May 17 00:23:21.107901 kernel: QLogic iSCSI HBA Driver May 17 00:23:21.150400 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 17 00:23:21.154646 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 17 00:23:21.180761 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 17 00:23:21.180833 kernel: device-mapper: uevent: version 1.0.3 May 17 00:23:21.183207 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 17 00:23:21.224487 kernel: raid6: avx512x4 gen() 17895 MB/s May 17 00:23:21.242492 kernel: raid6: avx512x2 gen() 17934 MB/s May 17 00:23:21.260483 kernel: raid6: avx512x1 gen() 17860 MB/s May 17 00:23:21.278488 kernel: raid6: avx2x4 gen() 17863 MB/s May 17 00:23:21.295486 kernel: raid6: avx2x2 gen() 17834 MB/s May 17 00:23:21.313617 kernel: raid6: avx2x1 gen() 13738 MB/s May 17 00:23:21.313700 kernel: raid6: using algorithm avx512x2 gen() 17934 MB/s May 17 00:23:21.332606 kernel: raid6: .... xor() 24920 MB/s, rmw enabled May 17 00:23:21.332674 kernel: raid6: using avx512x2 recovery algorithm May 17 00:23:21.354468 kernel: xor: automatically using best checksumming function avx May 17 00:23:21.518460 kernel: Btrfs loaded, zoned=no, fsverity=no May 17 00:23:21.528990 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 17 00:23:21.538643 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:23:21.551840 systemd-udevd[398]: Using default interface naming scheme 'v255'. May 17 00:23:21.556809 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:23:21.565698 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 17 00:23:21.584463 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation May 17 00:23:21.614610 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:23:21.621711 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 00:23:21.671703 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:23:21.679676 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
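dracut has just re-read the same kernel command line the kernel echoed at boot. Structurally it is a whitespace-separated list of key=value pairs and bare flags, which is easy to pick apart. A minimal sketch (Python; note the real kernel keeps duplicate keys, such as the two console= and rootflags= entries above, while this dict keeps only the last occurrence):

    def parse_cmdline(cmdline: str) -> dict:
        """Split a kernel command line into {key: value}; bare flags map to None."""
        params = {}
        for token in cmdline.split():
            key, sep, value = token.partition("=")
            params[key] = value if sep else None
        return params

    # e.g. parse_cmdline(open("/proc/cmdline").read())["verity.usrhash"]
    # yields the dm-verity root hash that /dev/mapper/usr must reproduce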
May 17 00:23:21.704315 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 17 00:23:21.707012 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 17 00:23:21.708927 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:23:21.710069 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 00:23:21.714675 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 17 00:23:21.741850 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 17 00:23:21.761561 kernel: ena 0000:00:05.0: ENA device version: 0.10 May 17 00:23:21.761824 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 May 17 00:23:21.768460 kernel: cryptd: max_cpu_qlen set to 1000 May 17 00:23:21.772460 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. May 17 00:23:21.785448 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:7e:92:8d:f4:05 May 17 00:23:21.799451 kernel: AVX2 version of gcm_enc/dec engaged. May 17 00:23:21.802458 kernel: AES CTR mode by8 optimization enabled May 17 00:23:21.804942 (udev-worker)[451]: Network interface NamePolicy= disabled on kernel command line. May 17 00:23:21.806845 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:23:21.807002 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:23:21.808668 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:23:21.809580 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:23:21.810519 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:23:21.812561 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:23:21.825605 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:23:21.843008 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:23:21.856100 kernel: nvme nvme0: pci function 0000:00:04.0 May 17 00:23:21.856340 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 May 17 00:23:21.843132 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:23:21.859605 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:23:21.863148 kernel: nvme nvme0: 2/0/0 default/read/poll queues May 17 00:23:21.872357 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 17 00:23:21.872418 kernel: GPT:9289727 != 16777215 May 17 00:23:21.872450 kernel: GPT:Alternate GPT header not at the end of the disk. May 17 00:23:21.873911 kernel: GPT:9289727 != 16777215 May 17 00:23:21.873957 kernel: GPT: Use GNU Parted to correct GPT errors. May 17 00:23:21.873975 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 17 00:23:21.879270 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:23:21.888783 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:23:21.907082 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:23:21.971628 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (461) May 17 00:23:21.983179 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. 
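The "GPT:9289727 != 16777215" complaint above is the usual cloud-image situation: the backup GPT header still sits where the raw disk image ended, not at the end of the larger EBS volume the image was written onto. The arithmetic (Python, assuming the 512-byte logical sectors this NVMe volume exposes):

    SECTOR = 512

    image_alt_header_lba = 9289727    # where the image's backup GPT header sits
    disk_last_lba        = 16777215   # where GPT expects it: the volume's last LBA

    print((image_alt_header_lba + 1) * SECTOR / 2**30)  # ~4.43 GiB: the raw image
    print((disk_last_lba + 1) * SECTOR / 2**30)         # 8.00 GiB: the EBS volume

It is harmless at this stage; the disk-uuid.service run below rewrites the headers anyway ("Primary Header is updated. ... Secondary Header is updated.").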
May 17 00:23:21.984801 kernel: BTRFS: device fsid 7f88d479-6686-439c-8052-b96f0a9d77bc devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (446) May 17 00:23:22.031758 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. May 17 00:23:22.037166 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. May 17 00:23:22.037740 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. May 17 00:23:22.044615 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. May 17 00:23:22.055640 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 17 00:23:22.062154 disk-uuid[633]: Primary Header is updated. May 17 00:23:22.062154 disk-uuid[633]: Secondary Entries is updated. May 17 00:23:22.062154 disk-uuid[633]: Secondary Header is updated. May 17 00:23:22.067459 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 17 00:23:22.073456 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 17 00:23:22.082262 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 17 00:23:23.089738 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 17 00:23:23.089795 disk-uuid[634]: The operation has completed successfully. May 17 00:23:23.194010 systemd[1]: disk-uuid.service: Deactivated successfully. May 17 00:23:23.194108 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 17 00:23:23.216612 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 17 00:23:23.220003 sh[977]: Success May 17 00:23:23.242448 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 17 00:23:23.343404 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 17 00:23:23.362565 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 17 00:23:23.365028 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 17 00:23:23.392614 kernel: BTRFS info (device dm-0): first mount of filesystem 7f88d479-6686-439c-8052-b96f0a9d77bc May 17 00:23:23.392679 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 17 00:23:23.394912 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 17 00:23:23.398738 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 17 00:23:23.398794 kernel: BTRFS info (device dm-0): using free space tree May 17 00:23:23.523455 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 17 00:23:23.547575 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 17 00:23:23.548598 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 17 00:23:23.552582 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 17 00:23:23.555554 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 17 00:23:23.581377 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:23:23.581460 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm May 17 00:23:23.581477 kernel: BTRFS info (device nvme0n1p6): using free space tree May 17 00:23:23.588454 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 17 00:23:23.597230 systemd[1]: mnt-oem.mount: Deactivated successfully. 
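verity-setup.service has just produced /dev/mapper/usr: a read-only device-mapper target that re-hashes every block on read and rejects anything that does not chain up to the root hash pinned on the kernel command line (verity.usrhash=6b60288b...). For illustration only, the equivalent manual activation through the cryptsetup tooling looks roughly like this (Python wrapper; the device paths are placeholders, and Flatcar's initrd drives device-mapper itself rather than shelling out):

    import subprocess

    def open_usr(data_dev: str, hash_dev: str, root_hash: str) -> None:
        """Activate a dm-verity mapping named 'usr' over data_dev."""
        subprocess.run(
            ["veritysetup", "open", data_dev, "usr", hash_dev, root_hash],
            check=True,  # raises if the hash tree doesn't match root_hash
        )

Once the mapping exists, any tampered block surfaces as an I/O error on read, which is what makes the read-only /usr partition trustworthy.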
May 17 00:23:23.600461 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:23:23.605318 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 17 00:23:23.611582 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 17 00:23:23.642980 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:23:23.649729 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:23:23.670014 systemd-networkd[1169]: lo: Link UP May 17 00:23:23.670022 systemd-networkd[1169]: lo: Gained carrier May 17 00:23:23.671241 systemd-networkd[1169]: Enumeration completed May 17 00:23:23.671578 systemd-networkd[1169]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:23:23.671582 systemd-networkd[1169]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:23:23.672891 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 00:23:23.673651 systemd[1]: Reached target network.target - Network. May 17 00:23:23.674733 systemd-networkd[1169]: eth0: Link UP May 17 00:23:23.674742 systemd-networkd[1169]: eth0: Gained carrier May 17 00:23:23.674752 systemd-networkd[1169]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:23:23.691523 systemd-networkd[1169]: eth0: DHCPv4 address 172.31.18.208/20, gateway 172.31.16.1 acquired from 172.31.16.1 May 17 00:23:24.065118 ignition[1124]: Ignition 2.19.0 May 17 00:23:24.065130 ignition[1124]: Stage: fetch-offline May 17 00:23:24.065329 ignition[1124]: no configs at "/usr/lib/ignition/base.d" May 17 00:23:24.065338 ignition[1124]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 17 00:23:24.066874 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:23:24.065743 ignition[1124]: Ignition finished successfully May 17 00:23:24.073660 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
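The DHCPv4 lease above hands eth0 172.31.18.208/20 with gateway 172.31.16.1, a /20 carved from 172.31.0.0/16 (presumably a default VPC). A two-line check that the gateway really is on-link (Python standard library):

    import ipaddress

    iface = ipaddress.ip_interface("172.31.18.208/20")
    print(iface.network)                                         # 172.31.16.0/20
    print(ipaddress.ip_address("172.31.16.1") in iface.network)  # True: on-link gateway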
May 17 00:23:24.085885 ignition[1178]: Ignition 2.19.0 May 17 00:23:24.085896 ignition[1178]: Stage: fetch May 17 00:23:24.086258 ignition[1178]: no configs at "/usr/lib/ignition/base.d" May 17 00:23:24.086273 ignition[1178]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 17 00:23:24.086353 ignition[1178]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 17 00:23:24.118387 ignition[1178]: PUT result: OK May 17 00:23:24.120613 ignition[1178]: parsed url from cmdline: "" May 17 00:23:24.120668 ignition[1178]: no config URL provided May 17 00:23:24.120678 ignition[1178]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:23:24.120701 ignition[1178]: no config at "/usr/lib/ignition/user.ign" May 17 00:23:24.120719 ignition[1178]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 17 00:23:24.122138 ignition[1178]: PUT result: OK May 17 00:23:24.122183 ignition[1178]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 May 17 00:23:24.122888 ignition[1178]: GET result: OK May 17 00:23:24.122953 ignition[1178]: parsing config with SHA512: 0d53eaf35f62f49b7ea5a13a8c12fe3a4df83d96d567ad9c33aa697d2f7b6138a7b401490fe1c59ccd3322b0a28628545dbbef3b0d138a8d60b79b0f6df6c2ce May 17 00:23:24.126696 unknown[1178]: fetched base config from "system" May 17 00:23:24.126705 unknown[1178]: fetched base config from "system" May 17 00:23:24.126710 unknown[1178]: fetched user config from "aws" May 17 00:23:24.129330 ignition[1178]: fetch: fetch complete May 17 00:23:24.129339 ignition[1178]: fetch: fetch passed May 17 00:23:24.129396 ignition[1178]: Ignition finished successfully May 17 00:23:24.131019 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 17 00:23:24.136667 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 17 00:23:24.150860 ignition[1185]: Ignition 2.19.0 May 17 00:23:24.150870 ignition[1185]: Stage: kargs May 17 00:23:24.151212 ignition[1185]: no configs at "/usr/lib/ignition/base.d" May 17 00:23:24.151222 ignition[1185]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 17 00:23:24.151305 ignition[1185]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 17 00:23:24.152104 ignition[1185]: PUT result: OK May 17 00:23:24.154496 ignition[1185]: kargs: kargs passed May 17 00:23:24.154555 ignition[1185]: Ignition finished successfully May 17 00:23:24.155986 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 17 00:23:24.161375 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 17 00:23:24.173709 ignition[1191]: Ignition 2.19.0 May 17 00:23:24.173721 ignition[1191]: Stage: disks May 17 00:23:24.174061 ignition[1191]: no configs at "/usr/lib/ignition/base.d" May 17 00:23:24.174074 ignition[1191]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 17 00:23:24.174166 ignition[1191]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 17 00:23:24.174931 ignition[1191]: PUT result: OK May 17 00:23:24.177587 ignition[1191]: disks: disks passed May 17 00:23:24.177643 ignition[1191]: Ignition finished successfully May 17 00:23:24.178819 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 17 00:23:24.179638 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 17 00:23:24.180222 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 17 00:23:24.180571 systemd[1]: Reached target local-fs.target - Local File Systems. 
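The fetch stage above shows Ignition speaking IMDSv2: each request to 169.254.169.254 is preceded by a PUT to /latest/api/token, and the returned session token is then presented on the GET for /2019-10-01/user-data. A minimal re-creation of that exchange (Python standard library; this only works from inside an EC2 instance, and the 21600-second TTL here is an arbitrary choice):

    import urllib.request

    IMDS = "http://169.254.169.254"

    def fetch_user_data() -> bytes:
        """PUT for an IMDSv2 session token, then GET the user data with it."""
        req = urllib.request.Request(
            IMDS + "/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
        )
        token = urllib.request.urlopen(req, timeout=2).read().decode()
        req = urllib.request.Request(
            IMDS + "/2019-10-01/user-data",
            headers={"X-aws-ec2-metadata-token": token},
        )
        return urllib.request.urlopen(req, timeout=2).read()

The body that comes back is the user's Ignition config; the "parsing config with SHA512: ..." line above is Ignition hashing exactly what it fetched.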
May 17 00:23:24.181095 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:23:24.181707 systemd[1]: Reached target basic.target - Basic System. May 17 00:23:24.187628 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 17 00:23:24.228877 systemd-fsck[1199]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 17 00:23:24.231629 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 17 00:23:24.235586 systemd[1]: Mounting sysroot.mount - /sysroot... May 17 00:23:24.339454 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 278698a4-82b6-49b4-b6df-f7999ed4e35e r/w with ordered data mode. Quota mode: none. May 17 00:23:24.339590 systemd[1]: Mounted sysroot.mount - /sysroot. May 17 00:23:24.340671 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 17 00:23:24.362618 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:23:24.365615 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 17 00:23:24.366800 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 17 00:23:24.366869 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 17 00:23:24.366903 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:23:24.383909 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 17 00:23:24.387871 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1218) May 17 00:23:24.389915 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 17 00:23:24.393627 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:23:24.393652 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm May 17 00:23:24.393665 kernel: BTRFS info (device nvme0n1p6): using free space tree May 17 00:23:24.406543 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 17 00:23:24.407526 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 17 00:23:24.899631 initrd-setup-root[1242]: cut: /sysroot/etc/passwd: No such file or directory May 17 00:23:24.916100 initrd-setup-root[1249]: cut: /sysroot/etc/group: No such file or directory May 17 00:23:24.920299 initrd-setup-root[1256]: cut: /sysroot/etc/shadow: No such file or directory May 17 00:23:24.939699 initrd-setup-root[1263]: cut: /sysroot/etc/gshadow: No such file or directory May 17 00:23:25.233690 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 17 00:23:25.239549 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 17 00:23:25.242719 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 17 00:23:25.252447 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:23:25.253362 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
May 17 00:23:25.276450 ignition[1330]: INFO : Ignition 2.19.0 May 17 00:23:25.276450 ignition[1330]: INFO : Stage: mount May 17 00:23:25.278766 ignition[1330]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:23:25.278766 ignition[1330]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 17 00:23:25.278766 ignition[1330]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 17 00:23:25.281755 ignition[1330]: INFO : PUT result: OK May 17 00:23:25.284363 ignition[1330]: INFO : mount: mount passed May 17 00:23:25.285976 ignition[1330]: INFO : Ignition finished successfully May 17 00:23:25.286782 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 17 00:23:25.292558 systemd[1]: Starting ignition-files.service - Ignition (files)... May 17 00:23:25.301948 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 17 00:23:25.318727 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:23:25.340456 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1343) May 17 00:23:25.344458 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:23:25.344519 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm May 17 00:23:25.344533 kernel: BTRFS info (device nvme0n1p6): using free space tree May 17 00:23:25.351457 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 17 00:23:25.353642 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 17 00:23:25.379532 ignition[1359]: INFO : Ignition 2.19.0 May 17 00:23:25.380252 ignition[1359]: INFO : Stage: files May 17 00:23:25.380818 ignition[1359]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:23:25.380818 ignition[1359]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 17 00:23:25.381804 ignition[1359]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 17 00:23:25.382445 ignition[1359]: INFO : PUT result: OK May 17 00:23:25.385416 ignition[1359]: DEBUG : files: compiled without relabeling support, skipping May 17 00:23:25.399765 ignition[1359]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 17 00:23:25.399765 ignition[1359]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 17 00:23:25.417645 ignition[1359]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 17 00:23:25.418389 ignition[1359]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 17 00:23:25.418389 ignition[1359]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 17 00:23:25.418148 unknown[1359]: wrote ssh authorized keys file for user: core May 17 00:23:25.420324 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 17 00:23:25.420945 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 17 00:23:25.517007 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 17 00:23:25.705656 systemd-networkd[1169]: eth0: Gained IPv6LL May 17 00:23:25.771301 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 17 00:23:25.771301 ignition[1359]: INFO : files: createFilesystemsFiles: 
createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 17 00:23:25.773083 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 17 00:23:25.773083 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 17 00:23:25.773083 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 17 00:23:25.773083 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:23:25.773083 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:23:25.773083 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:23:25.773083 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:23:25.778475 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:23:25.778475 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:23:25.778475 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 17 00:23:25.778475 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 17 00:23:25.778475 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 17 00:23:25.778475 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 May 17 00:23:26.532090 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 17 00:23:27.453363 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 17 00:23:27.453363 ignition[1359]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 17 00:23:27.456011 ignition[1359]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:23:27.456011 ignition[1359]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:23:27.456011 ignition[1359]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 17 00:23:27.456011 ignition[1359]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" May 17 00:23:27.456011 ignition[1359]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" May 17 00:23:27.456011 ignition[1359]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 
00:23:27.456011 ignition[1359]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 00:23:27.456011 ignition[1359]: INFO : files: files passed May 17 00:23:27.456011 ignition[1359]: INFO : Ignition finished successfully May 17 00:23:27.457108 systemd[1]: Finished ignition-files.service - Ignition (files). May 17 00:23:27.461676 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 17 00:23:27.464561 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 17 00:23:27.468216 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 00:23:27.468702 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 17 00:23:27.480789 initrd-setup-root-after-ignition[1389]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:23:27.480789 initrd-setup-root-after-ignition[1389]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 17 00:23:27.482796 initrd-setup-root-after-ignition[1393]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:23:27.483009 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:23:27.484154 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 17 00:23:27.488577 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 17 00:23:27.516493 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 00:23:27.516663 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 17 00:23:27.517694 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 17 00:23:27.518629 systemd[1]: Reached target initrd.target - Initrd Default Target. May 17 00:23:27.519387 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 17 00:23:27.520571 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 17 00:23:27.537537 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:23:27.542652 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 17 00:23:27.553377 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 17 00:23:27.554042 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:23:27.554892 systemd[1]: Stopped target timers.target - Timer Units. May 17 00:23:27.555594 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 17 00:23:27.555714 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:23:27.556695 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 17 00:23:27.557496 systemd[1]: Stopped target basic.target - Basic System. May 17 00:23:27.558169 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 17 00:23:27.558850 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:23:27.559516 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 17 00:23:27.560177 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 17 00:23:27.560888 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
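[editor's note] The files stage that just finished is a sequence of filesystem operations applied under the still-mounted /sysroot prefix: plain file writes, the sysext symlink, and a result marker. A toy sketch of the shape of those operations, with an assumed helper name and placeholder contents; this is not Ignition's internal code:

```go
package main

import (
	"os"
	"path/filepath"
)

// writeUnder writes data to path, interpreted relative to sysroot.
func writeUnder(sysroot, path string, data []byte, mode os.FileMode) error {
	dst := filepath.Join(sysroot, path)
	if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
		return err
	}
	return os.WriteFile(dst, data, mode)
}

func main() {
	const sysroot = "/sysroot"

	// op(4): writing file "/sysroot/home/core/install.sh"
	if err := writeUnder(sysroot, "home/core/install.sh", []byte("#!/bin/bash\n"), 0o755); err != nil {
		panic(err)
	}

	// op(9): writing link "/sysroot/etc/extensions/kubernetes.raw"
	//        -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
	link := filepath.Join(sysroot, "etc/extensions/kubernetes.raw")
	if err := os.MkdirAll(filepath.Dir(link), 0o755); err != nil {
		panic(err)
	}
	if err := os.Symlink("/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw", link); err != nil {
		panic(err)
	}

	// op(e): the result marker; the real file's JSON schema is Ignition's own,
	// so an empty object stands in here.
	if err := writeUnder(sysroot, "etc/.ignition-result.json", []byte("{}\n"), 0o600); err != nil {
		panic(err)
	}
}
```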
May 17 00:23:27.561654 systemd[1]: Stopped target sysinit.target - System Initialization. May 17 00:23:27.562636 systemd[1]: Stopped target local-fs.target - Local File Systems. May 17 00:23:27.563311 systemd[1]: Stopped target swap.target - Swaps. May 17 00:23:27.563960 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 00:23:27.564078 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 17 00:23:27.565068 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 17 00:23:27.565886 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:23:27.566561 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 17 00:23:27.567258 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:23:27.567741 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 00:23:27.567871 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 17 00:23:27.569101 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 00:23:27.569227 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:23:27.570014 systemd[1]: ignition-files.service: Deactivated successfully. May 17 00:23:27.570170 systemd[1]: Stopped ignition-files.service - Ignition (files). May 17 00:23:27.576739 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 17 00:23:27.580781 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 17 00:23:27.581394 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 00:23:27.582929 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:23:27.586446 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 00:23:27.587200 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:23:27.592526 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 00:23:27.592650 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 17 00:23:27.598499 ignition[1413]: INFO : Ignition 2.19.0 May 17 00:23:27.598499 ignition[1413]: INFO : Stage: umount May 17 00:23:27.601911 ignition[1413]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:23:27.601911 ignition[1413]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 17 00:23:27.601911 ignition[1413]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 17 00:23:27.601911 ignition[1413]: INFO : PUT result: OK May 17 00:23:27.606443 ignition[1413]: INFO : umount: umount passed May 17 00:23:27.606443 ignition[1413]: INFO : Ignition finished successfully May 17 00:23:27.607412 systemd[1]: ignition-mount.service: Deactivated successfully. May 17 00:23:27.607599 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 17 00:23:27.608676 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 00:23:27.608747 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 17 00:23:27.609278 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:23:27.609340 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 17 00:23:27.609886 systemd[1]: ignition-fetch.service: Deactivated successfully. May 17 00:23:27.609950 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 17 00:23:27.610839 systemd[1]: Stopped target network.target - Network. 
May 17 00:23:27.611204 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 17 00:23:27.611266 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:23:27.612113 systemd[1]: Stopped target paths.target - Path Units. May 17 00:23:27.612941 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 17 00:23:27.617200 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:23:27.618495 systemd[1]: Stopped target slices.target - Slice Units. May 17 00:23:27.618931 systemd[1]: Stopped target sockets.target - Socket Units. May 17 00:23:27.619395 systemd[1]: iscsid.socket: Deactivated successfully. May 17 00:23:27.619463 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 17 00:23:27.619905 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 00:23:27.619953 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:23:27.620367 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 00:23:27.620422 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 17 00:23:27.622212 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 17 00:23:27.622273 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 17 00:23:27.622939 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 17 00:23:27.623499 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 17 00:23:27.625686 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 17 00:23:27.631298 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 00:23:27.631481 systemd-networkd[1169]: eth0: DHCPv6 lease lost May 17 00:23:27.632647 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 17 00:23:27.633370 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 00:23:27.633595 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 17 00:23:27.635244 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:23:27.635420 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 17 00:23:27.638231 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 00:23:27.638295 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 17 00:23:27.639012 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 00:23:27.639078 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 17 00:23:27.647591 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 17 00:23:27.647965 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:23:27.648029 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:23:27.648451 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:23:27.648492 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 17 00:23:27.648820 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 00:23:27.648855 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 17 00:23:27.649202 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 17 00:23:27.649237 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
May 17 00:23:27.649944 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:23:27.660532 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 00:23:27.660647 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 17 00:23:27.663066 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:23:27.663204 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:23:27.664110 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 00:23:27.664153 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 17 00:23:27.664826 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 00:23:27.664855 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:23:27.665475 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 00:23:27.665520 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 17 00:23:27.666613 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:23:27.666659 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 17 00:23:27.667740 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:23:27.667798 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:23:27.674626 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 17 00:23:27.675016 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 17 00:23:27.675075 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:23:27.675491 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 17 00:23:27.675529 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:23:27.675897 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 00:23:27.675935 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:23:27.676269 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:23:27.676305 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:23:27.680607 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 00:23:27.680691 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 17 00:23:27.682059 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 17 00:23:27.688580 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 17 00:23:27.695440 systemd[1]: Switching root. May 17 00:23:27.726310 systemd-journald[178]: Journal stopped May 17 00:23:29.462463 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). 
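[editor's note] "Switching root" above is the initramfs-to-real-root pivot: the journal stops, PID 1 moves /sysroot over /, and re-executes the real systemd, which is why journald reports SIGTERM from PID 1 and restarts later. Conceptually, this is the documented switch_root sequence; the sketch below only makes sense when run as PID 1 inside an initramfs and omits the cleanup of old initramfs files:

```go
package main

import (
	"os"
	"syscall"
)

func main() {
	// Classic switch_root dance (util-linux style):
	if err := os.Chdir("/sysroot"); err != nil {
		panic(err)
	}
	// Move the new root on top of / ...
	if err := syscall.Mount(".", "/", "", syscall.MS_MOVE, ""); err != nil {
		panic(err)
	}
	// ... make it the process root ...
	if err := syscall.Chroot("."); err != nil {
		panic(err)
	}
	if err := os.Chdir("/"); err != nil {
		panic(err)
	}
	// ... and exec the real init, replacing the initramfs PID 1.
	if err := syscall.Exec("/usr/lib/systemd/systemd", []string{"systemd"}, os.Environ()); err != nil {
		panic(err)
	}
}
```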
May 17 00:23:29.462522 kernel: SELinux: policy capability network_peer_controls=1 May 17 00:23:29.462537 kernel: SELinux: policy capability open_perms=1 May 17 00:23:29.462553 kernel: SELinux: policy capability extended_socket_class=1 May 17 00:23:29.462572 kernel: SELinux: policy capability always_check_network=0 May 17 00:23:29.462584 kernel: SELinux: policy capability cgroup_seclabel=1 May 17 00:23:29.462596 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 17 00:23:29.465251 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 17 00:23:29.465269 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 17 00:23:29.465282 kernel: audit: type=1403 audit(1747441408.300:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 17 00:23:29.465295 systemd[1]: Successfully loaded SELinux policy in 121.681ms. May 17 00:23:29.465324 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.404ms. May 17 00:23:29.465342 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 00:23:29.465355 systemd[1]: Detected virtualization amazon. May 17 00:23:29.465368 systemd[1]: Detected architecture x86-64. May 17 00:23:29.465380 systemd[1]: Detected first boot. May 17 00:23:29.465403 systemd[1]: Initializing machine ID from VM UUID. May 17 00:23:29.465415 zram_generator::config[1455]: No configuration found. May 17 00:23:29.469503 systemd[1]: Populated /etc with preset unit settings. May 17 00:23:29.469524 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 17 00:23:29.469537 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 17 00:23:29.469556 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 17 00:23:29.469569 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 17 00:23:29.469582 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 17 00:23:29.469595 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 17 00:23:29.469607 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 17 00:23:29.469620 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 17 00:23:29.469633 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 17 00:23:29.469645 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 17 00:23:29.469660 systemd[1]: Created slice user.slice - User and Session Slice. May 17 00:23:29.469672 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:23:29.469690 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:23:29.469703 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 17 00:23:29.469716 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 17 00:23:29.469729 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
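[editor's note] "Initializing machine ID from VM UUID" above: on a first boot inside a VM, systemd seeds the machine ID from the hypervisor-provided DMI product UUID instead of generating a random one. A sketch of where that raw value can be read on a KVM guest; systemd's own derivation and validation are more involved than this:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Readable by root; on EC2/KVM this is the hypervisor-assigned UUID.
	raw, err := os.ReadFile("/sys/class/dmi/id/product_uuid")
	if err != nil {
		panic(err)
	}
	fmt.Println("vm uuid:", strings.ToLower(strings.TrimSpace(string(raw))))
}
```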
May 17 00:23:29.469741 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 00:23:29.469759 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 17 00:23:29.469771 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:23:29.469786 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 17 00:23:29.469798 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 17 00:23:29.469811 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 17 00:23:29.469825 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 17 00:23:29.469838 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:23:29.469850 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 00:23:29.469862 systemd[1]: Reached target slices.target - Slice Units. May 17 00:23:29.469875 systemd[1]: Reached target swap.target - Swaps. May 17 00:23:29.469890 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 17 00:23:29.469903 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 17 00:23:29.469915 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 00:23:29.469927 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 00:23:29.469940 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:23:29.469952 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 17 00:23:29.469965 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 17 00:23:29.469978 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 17 00:23:29.469990 systemd[1]: Mounting media.mount - External Media Directory... May 17 00:23:29.470006 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:23:29.470453 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 17 00:23:29.470475 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 17 00:23:29.470487 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 17 00:23:29.470500 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 00:23:29.470513 systemd[1]: Reached target machines.target - Containers. May 17 00:23:29.470526 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 17 00:23:29.470539 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:23:29.470556 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 00:23:29.470569 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 17 00:23:29.470581 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:23:29.470594 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 17 00:23:29.470607 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:23:29.470619 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
May 17 00:23:29.470632 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:23:29.470673 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 17 00:23:29.470688 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 17 00:23:29.470703 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 17 00:23:29.470716 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 17 00:23:29.470728 systemd[1]: Stopped systemd-fsck-usr.service. May 17 00:23:29.470741 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 00:23:29.470753 kernel: fuse: init (API version 7.39) May 17 00:23:29.470766 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 00:23:29.470779 kernel: loop: module loaded May 17 00:23:29.470792 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 17 00:23:29.470805 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 17 00:23:29.470820 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 00:23:29.470833 systemd[1]: verity-setup.service: Deactivated successfully. May 17 00:23:29.470846 systemd[1]: Stopped verity-setup.service. May 17 00:23:29.470858 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:23:29.470871 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 17 00:23:29.470883 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 17 00:23:29.470897 systemd[1]: Mounted media.mount - External Media Directory. May 17 00:23:29.470912 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 17 00:23:29.470925 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 17 00:23:29.470937 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 17 00:23:29.470949 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:23:29.470962 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 17 00:23:29.471002 systemd-journald[1533]: Collecting audit messages is disabled. May 17 00:23:29.471029 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 17 00:23:29.471042 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:23:29.471054 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:23:29.471067 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:23:29.471085 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:23:29.471097 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 17 00:23:29.471109 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 17 00:23:29.471123 systemd-journald[1533]: Journal started May 17 00:23:29.471152 systemd-journald[1533]: Runtime Journal (/run/log/journal/ec25a558bc2bc4ca2a5be9aad7ee2e2f) is 4.7M, max 38.2M, 33.4M free. May 17 00:23:29.167297 systemd[1]: Queued start job for default target multi-user.target. May 17 00:23:29.212742 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. 
May 17 00:23:29.213201 systemd[1]: systemd-journald.service: Deactivated successfully. May 17 00:23:29.474202 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:23:29.473707 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:23:29.473839 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:23:29.475558 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 00:23:29.476209 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 17 00:23:29.476891 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 17 00:23:29.486675 systemd[1]: Reached target network-pre.target - Preparation for Network. May 17 00:23:29.512921 kernel: ACPI: bus type drm_connector registered May 17 00:23:29.501093 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 17 00:23:29.505448 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 17 00:23:29.505870 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 00:23:29.505903 systemd[1]: Reached target local-fs.target - Local File Systems. May 17 00:23:29.507211 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 17 00:23:29.508632 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 17 00:23:29.512574 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 17 00:23:29.513616 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:23:29.518575 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 17 00:23:29.526671 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 17 00:23:29.528526 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:23:29.529550 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 17 00:23:29.531521 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:23:29.539790 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:23:29.542103 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 17 00:23:29.544719 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 00:23:29.546970 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 17 00:23:29.548971 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 17 00:23:29.554145 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:23:29.554293 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 00:23:29.570718 kernel: loop0: detected capacity change from 0 to 224512 May 17 00:23:29.563188 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 17 00:23:29.563897 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 17 00:23:29.565252 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
May 17 00:23:29.576703 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 17 00:23:29.578070 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 17 00:23:29.592142 systemd-journald[1533]: Time spent on flushing to /var/log/journal/ec25a558bc2bc4ca2a5be9aad7ee2e2f is 29.943ms for 989 entries. May 17 00:23:29.592142 systemd-journald[1533]: System Journal (/var/log/journal/ec25a558bc2bc4ca2a5be9aad7ee2e2f) is 8.0M, max 195.6M, 187.6M free. May 17 00:23:29.629052 systemd-journald[1533]: Received client request to flush runtime journal. May 17 00:23:29.602517 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:23:29.605669 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 17 00:23:29.611372 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:23:29.630648 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 17 00:23:29.634090 systemd-tmpfiles[1583]: ACLs are not supported, ignoring. May 17 00:23:29.635659 systemd-tmpfiles[1583]: ACLs are not supported, ignoring. May 17 00:23:29.636304 udevadm[1595]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 17 00:23:29.642874 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:23:29.648731 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 17 00:23:29.659449 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 00:23:29.663050 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 00:23:29.664239 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 17 00:23:29.703473 kernel: loop1: detected capacity change from 0 to 142488 May 17 00:23:29.723329 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 17 00:23:29.731606 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 00:23:29.754618 systemd-tmpfiles[1606]: ACLs are not supported, ignoring. May 17 00:23:29.754933 systemd-tmpfiles[1606]: ACLs are not supported, ignoring. May 17 00:23:29.759268 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:23:29.855458 kernel: loop2: detected capacity change from 0 to 140768 May 17 00:23:29.995456 kernel: loop3: detected capacity change from 0 to 61336 May 17 00:23:30.103448 kernel: loop4: detected capacity change from 0 to 224512 May 17 00:23:30.153495 kernel: loop5: detected capacity change from 0 to 142488 May 17 00:23:30.184570 kernel: loop6: detected capacity change from 0 to 140768 May 17 00:23:30.216456 kernel: loop7: detected capacity change from 0 to 61336 May 17 00:23:30.223620 (sd-merge)[1612]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. May 17 00:23:30.224102 (sd-merge)[1612]: Merged extensions into '/usr'. May 17 00:23:30.231166 systemd[1]: Reloading requested from client PID 1582 ('systemd-sysext') (unit systemd-sysext.service)... May 17 00:23:30.231312 systemd[1]: Reloading... May 17 00:23:30.305479 zram_generator::config[1639]: No configuration found. 
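[editor's note] The (sd-merge) lines below this point are systemd-sysext composing the containerd-flatcar, docker-flatcar, kubernetes, and oem-ami extension images into a single read-only overlayfs stacked on /usr; the subsequent "Reloading" is systemd picking up the unit files that arrive with them. Conceptually the merge is an overlay mount shaped like the sketch below; the layer paths are assumptions, and the real service additionally handles verity, multiple hierarchies, and release-file version checks:

```go
package main

import "syscall"

func main() {
	// Each extension contributes a lowerdir layered above the base /usr;
	// with no upperdir, the merged tree stays read-only.
	opts := "lowerdir=/run/extensions/kubernetes/usr:/run/extensions/docker-flatcar/usr:/usr"
	if err := syscall.Mount("overlay", "/usr", "overlay", syscall.MS_RDONLY, opts); err != nil {
		panic(err)
	}
}
```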
May 17 00:23:30.439816 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:23:30.509109 systemd[1]: Reloading finished in 277 ms. May 17 00:23:30.537202 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 17 00:23:30.538890 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 17 00:23:30.548138 systemd[1]: Starting ensure-sysext.service... May 17 00:23:30.549806 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 00:23:30.553341 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:23:30.561395 systemd[1]: Reloading requested from client PID 1691 ('systemctl') (unit ensure-sysext.service)... May 17 00:23:30.561411 systemd[1]: Reloading... May 17 00:23:30.592799 systemd-udevd[1693]: Using default interface naming scheme 'v255'. May 17 00:23:30.593798 systemd-tmpfiles[1692]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 00:23:30.594381 systemd-tmpfiles[1692]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 17 00:23:30.595258 systemd-tmpfiles[1692]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 17 00:23:30.595535 systemd-tmpfiles[1692]: ACLs are not supported, ignoring. May 17 00:23:30.595600 systemd-tmpfiles[1692]: ACLs are not supported, ignoring. May 17 00:23:30.600801 systemd-tmpfiles[1692]: Detected autofs mount point /boot during canonicalization of boot. May 17 00:23:30.600813 systemd-tmpfiles[1692]: Skipping /boot May 17 00:23:30.616274 systemd-tmpfiles[1692]: Detected autofs mount point /boot during canonicalization of boot. May 17 00:23:30.616673 systemd-tmpfiles[1692]: Skipping /boot May 17 00:23:30.653474 zram_generator::config[1718]: No configuration found. May 17 00:23:30.767419 (udev-worker)[1765]: Network interface NamePolicy= disabled on kernel command line. May 17 00:23:30.830838 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr May 17 00:23:30.841446 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 17 00:23:30.853450 kernel: ACPI: button: Power Button [PWRF] May 17 00:23:30.858163 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:23:30.859451 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 May 17 00:23:30.877446 kernel: ACPI: button: Sleep Button [SLPF] May 17 00:23:30.885466 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 May 17 00:23:30.929035 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 17 00:23:30.929716 systemd[1]: Reloading finished in 367 ms. May 17 00:23:30.939453 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1746) May 17 00:23:30.952451 kernel: mousedev: PS/2 mouse device common for all mice May 17 00:23:30.948033 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
May 17 00:23:30.948951 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:23:30.995090 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:23:30.999122 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 17 00:23:31.003285 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 17 00:23:31.011505 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:23:31.017419 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 00:23:31.022733 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 17 00:23:31.032513 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:23:31.046175 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:23:31.046415 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:23:31.056128 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:23:31.058509 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:23:31.067743 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:23:31.068226 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:23:31.068399 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:23:31.074332 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 17 00:23:31.077533 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:23:31.077717 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:23:31.077862 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:23:31.077943 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:23:31.082402 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:23:31.083324 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:23:31.090709 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 17 00:23:31.091204 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:23:31.091377 systemd[1]: Reached target time-set.target - System Time Set. May 17 00:23:31.091849 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:23:31.106231 systemd[1]: Finished ensure-sysext.service. May 17 00:23:31.107463 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
May 17 00:23:31.112623 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 17 00:23:31.114258 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:23:31.115488 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:23:31.139062 augenrules[1909]: No rules May 17 00:23:31.141143 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:23:31.145814 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:23:31.145948 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:23:31.146629 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:23:31.146747 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:23:31.153541 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:23:31.154712 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:23:31.160066 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:23:31.160768 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 00:23:31.183296 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. May 17 00:23:31.192316 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 17 00:23:31.196735 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 17 00:23:31.206797 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 17 00:23:31.208803 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 17 00:23:31.211276 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:23:31.212321 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 17 00:23:31.216828 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 17 00:23:31.238265 lvm[1926]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:23:31.261612 ldconfig[1574]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 17 00:23:31.272229 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 17 00:23:31.282618 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 17 00:23:31.283698 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 17 00:23:31.284780 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:23:31.301755 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 17 00:23:31.302527 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:23:31.303164 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 17 00:23:31.309755 lvm[1941]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
May 17 00:23:31.311093 systemd-resolved[1843]: Positive Trust Anchors: May 17 00:23:31.311112 systemd-resolved[1843]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:23:31.311148 systemd-resolved[1843]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:23:31.319358 systemd-resolved[1843]: Defaulting to hostname 'linux'. May 17 00:23:31.321632 systemd-networkd[1837]: lo: Link UP May 17 00:23:31.321768 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:23:31.321920 systemd-networkd[1837]: lo: Gained carrier May 17 00:23:31.322359 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:23:31.322768 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:23:31.323177 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 17 00:23:31.323540 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 17 00:23:31.323996 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 17 00:23:31.324378 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 17 00:23:31.324721 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 17 00:23:31.325021 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:23:31.325046 systemd[1]: Reached target paths.target - Path Units. May 17 00:23:31.325345 systemd[1]: Reached target timers.target - Timer Units. May 17 00:23:31.325638 systemd-networkd[1837]: Enumeration completed May 17 00:23:31.326040 systemd-networkd[1837]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:23:31.326106 systemd-networkd[1837]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:23:31.327204 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 17 00:23:31.328930 systemd[1]: Starting docker.socket - Docker Socket for the API... May 17 00:23:31.331590 systemd-networkd[1837]: eth0: Link UP May 17 00:23:31.331807 systemd-networkd[1837]: eth0: Gained carrier May 17 00:23:31.331831 systemd-networkd[1837]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:23:31.334911 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 17 00:23:31.335887 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 00:23:31.336580 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 17 00:23:31.337532 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 17 00:23:31.338503 systemd[1]: Reached target network.target - Network. 
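[editor's note] eth0 above was configured by Flatcar's catch-all /usr/lib/systemd/network/zz-default.network, which is why networkd warns about the "potentially unpredictable interface name" match. A minimal unit of the same shape, illustrative only (Flatcar's actual file also sets DHCP client options):

```ini
[Match]
# zz-default is a catch-all match; here it picked up eth0.
Name=*

[Network]
DHCP=yes
```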
May 17 00:23:31.338867 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:23:31.339192 systemd[1]: Reached target basic.target - Basic System. May 17 00:23:31.339581 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 17 00:23:31.339613 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 17 00:23:31.340666 systemd[1]: Starting containerd.service - containerd container runtime... May 17 00:23:31.341714 systemd-networkd[1837]: eth0: DHCPv4 address 172.31.18.208/20, gateway 172.31.16.1 acquired from 172.31.16.1 May 17 00:23:31.343485 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 17 00:23:31.347578 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 17 00:23:31.348931 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 17 00:23:31.352404 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 17 00:23:31.352791 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 17 00:23:31.354610 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 17 00:23:31.357622 systemd[1]: Started ntpd.service - Network Time Service. May 17 00:23:31.359574 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 17 00:23:31.361470 jq[1951]: false May 17 00:23:31.364535 systemd[1]: Starting setup-oem.service - Setup OEM... May 17 00:23:31.366555 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 17 00:23:31.369803 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 17 00:23:31.379603 systemd[1]: Starting systemd-logind.service - User Login Management... May 17 00:23:31.382092 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 17 00:23:31.382831 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 17 00:23:31.383247 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 17 00:23:31.390660 systemd[1]: Starting update-engine.service - Update Engine... May 17 00:23:31.394565 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
May 17 00:23:31.397759 extend-filesystems[1952]: Found loop4 May 17 00:23:31.397759 extend-filesystems[1952]: Found loop5 May 17 00:23:31.397759 extend-filesystems[1952]: Found loop6 May 17 00:23:31.397759 extend-filesystems[1952]: Found loop7 May 17 00:23:31.415930 extend-filesystems[1952]: Found nvme0n1 May 17 00:23:31.415930 extend-filesystems[1952]: Found nvme0n1p1 May 17 00:23:31.415930 extend-filesystems[1952]: Found nvme0n1p2 May 17 00:23:31.415930 extend-filesystems[1952]: Found nvme0n1p3 May 17 00:23:31.415930 extend-filesystems[1952]: Found usr May 17 00:23:31.415930 extend-filesystems[1952]: Found nvme0n1p4 May 17 00:23:31.415930 extend-filesystems[1952]: Found nvme0n1p6 May 17 00:23:31.415930 extend-filesystems[1952]: Found nvme0n1p7 May 17 00:23:31.415930 extend-filesystems[1952]: Found nvme0n1p9 May 17 00:23:31.415930 extend-filesystems[1952]: Checking size of /dev/nvme0n1p9 May 17 00:23:31.398894 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 00:23:31.399053 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 17 00:23:31.447215 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 00:23:31.447479 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 17 00:23:31.472399 jq[1963]: true May 17 00:23:31.481400 update_engine[1962]: I20250517 00:23:31.480942 1962 main.cc:92] Flatcar Update Engine starting May 17 00:23:31.486151 extend-filesystems[1952]: Resized partition /dev/nvme0n1p9 May 17 00:23:31.501656 extend-filesystems[1989]: resize2fs 1.47.1 (20-May-2024) May 17 00:23:31.502499 (ntainerd)[1976]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 17 00:23:31.510249 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:23:31.510720 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 17 00:23:31.511274 jq[1982]: true May 17 00:23:31.527067 tar[1965]: linux-amd64/LICENSE May 17 00:23:31.527496 tar[1965]: linux-amd64/helm May 17 00:23:31.532976 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks May 17 00:23:31.536656 dbus-daemon[1950]: [system] SELinux support is enabled May 17 00:23:31.539721 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 17 00:23:31.548047 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 00:23:31.548098 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 17 00:23:31.548658 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 00:23:31.548691 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
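[editor's note] For scale, assuming the default 4 KiB ext4 block size: the resize prepared above grows ROOT from 553472 blocks (about 2.1 GiB) to 1489915 blocks (about 5.7 GiB), i.e. resize2fs expands the filesystem to fill the partition that extend-filesystems just enlarged.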
May 17 00:23:31.573803 dbus-daemon[1950]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1837 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 17 00:23:31.584112 update_engine[1962]: I20250517 00:23:31.584046 1962 update_check_scheduler.cc:74] Next update check in 7m29s May 17 00:23:31.588762 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... May 17 00:23:31.589332 systemd[1]: Started update-engine.service - Update Engine. May 17 00:23:31.603145 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 17 00:23:31.623316 ntpd[1954]: ntpd 4.2.8p17@1.4004-o Fri May 16 22:07:47 UTC 2025 (1): Starting May 17 00:23:31.623352 ntpd[1954]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp May 17 00:23:31.623778 ntpd[1954]: 17 May 00:23:31 ntpd[1954]: ntpd 4.2.8p17@1.4004-o Fri May 16 22:07:47 UTC 2025 (1): Starting May 17 00:23:31.623778 ntpd[1954]: 17 May 00:23:31 ntpd[1954]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp May 17 00:23:31.623778 ntpd[1954]: 17 May 00:23:31 ntpd[1954]: ---------------------------------------------------- May 17 00:23:31.623778 ntpd[1954]: 17 May 00:23:31 ntpd[1954]: ntp-4 is maintained by Network Time Foundation, May 17 00:23:31.623778 ntpd[1954]: 17 May 00:23:31 ntpd[1954]: Inc. (NTF), a non-profit 501(c)(3) public-benefit May 17 00:23:31.623778 ntpd[1954]: 17 May 00:23:31 ntpd[1954]: corporation. Support and training for ntp-4 are May 17 00:23:31.623778 ntpd[1954]: 17 May 00:23:31 ntpd[1954]: available at https://www.nwtime.org/support May 17 00:23:31.623778 ntpd[1954]: 17 May 00:23:31 ntpd[1954]: ---------------------------------------------------- May 17 00:23:31.623363 ntpd[1954]: ---------------------------------------------------- May 17 00:23:31.623372 ntpd[1954]: ntp-4 is maintained by Network Time Foundation, May 17 00:23:31.623381 ntpd[1954]: Inc. (NTF), a non-profit 501(c)(3) public-benefit May 17 00:23:31.623390 ntpd[1954]: corporation. Support and training for ntp-4 are May 17 00:23:31.623400 ntpd[1954]: available at https://www.nwtime.org/support May 17 00:23:31.623409 ntpd[1954]: ---------------------------------------------------- May 17 00:23:31.630261 ntpd[1954]: proto: precision = 0.082 usec (-23) May 17 00:23:31.630385 ntpd[1954]: 17 May 00:23:31 ntpd[1954]: proto: precision = 0.082 usec (-23) May 17 00:23:31.636535 systemd[1]: Finished setup-oem.service - Setup OEM. May 17 00:23:31.638137 ntpd[1954]: basedate set to 2025-05-04 May 17 00:23:31.638310 ntpd[1954]: 17 May 00:23:31 ntpd[1954]: basedate set to 2025-05-04 May 17 00:23:31.638310 ntpd[1954]: 17 May 00:23:31 ntpd[1954]: gps base set to 2025-05-04 (week 2365) May 17 00:23:31.638165 ntpd[1954]: gps base set to 2025-05-04 (week 2365) May 17 00:23:31.641948 systemd-logind[1960]: Watching system buttons on /dev/input/event1 (Power Button) May 17 00:23:31.641985 systemd-logind[1960]: Watching system buttons on /dev/input/event2 (Sleep Button) May 17 00:23:31.642007 systemd-logind[1960]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 17 00:23:31.647804 systemd-logind[1960]: New seat seat0. 
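[Annotation: ntpd's "gps base set to 2025-05-04 (week 2365)" above can be cross-checked directly, since GPS weeks count from the GPS epoch, Sunday 1980-01-06. A one-line verification:]

    from datetime import date, timedelta

    GPS_EPOCH = date(1980, 1, 6)  # start of GPS week 0
    print(GPS_EPOCH + timedelta(weeks=2365))  # 2025-05-04, matching the log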
May 17 00:23:31.659682 ntpd[1954]: 17 May 00:23:31 ntpd[1954]: Listen and drop on 0 v6wildcard [::]:123 May 17 00:23:31.659682 ntpd[1954]: 17 May 00:23:31 ntpd[1954]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 17 00:23:31.656932 ntpd[1954]: Listen and drop on 0 v6wildcard [::]:123 May 17 00:23:31.656989 ntpd[1954]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 17 00:23:31.660440 systemd[1]: Started systemd-logind.service - User Login Management. May 17 00:23:31.665458 ntpd[1954]: 17 May 00:23:31 ntpd[1954]: Listen normally on 2 lo 127.0.0.1:123 May 17 00:23:31.665458 ntpd[1954]: 17 May 00:23:31 ntpd[1954]: Listen normally on 3 eth0 172.31.18.208:123 May 17 00:23:31.663781 ntpd[1954]: Listen normally on 2 lo 127.0.0.1:123 May 17 00:23:31.663827 ntpd[1954]: Listen normally on 3 eth0 172.31.18.208:123 May 17 00:23:31.666258 ntpd[1954]: Listen normally on 4 lo [::1]:123 May 17 00:23:31.666533 ntpd[1954]: 17 May 00:23:31 ntpd[1954]: Listen normally on 4 lo [::1]:123 May 17 00:23:31.666533 ntpd[1954]: 17 May 00:23:31 ntpd[1954]: bind(21) AF_INET6 fe80::47e:92ff:fe8d:f405%2#123 flags 0x11 failed: Cannot assign requested address May 17 00:23:31.666326 ntpd[1954]: bind(21) AF_INET6 fe80::47e:92ff:fe8d:f405%2#123 flags 0x11 failed: Cannot assign requested address May 17 00:23:31.718349 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 May 17 00:23:31.718404 coreos-metadata[1949]: May 17 00:23:31.690 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 17 00:23:31.718404 coreos-metadata[1949]: May 17 00:23:31.691 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 May 17 00:23:31.718404 coreos-metadata[1949]: May 17 00:23:31.691 INFO Fetch successful May 17 00:23:31.718404 coreos-metadata[1949]: May 17 00:23:31.691 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 May 17 00:23:31.718404 coreos-metadata[1949]: May 17 00:23:31.693 INFO Fetch successful May 17 00:23:31.718404 coreos-metadata[1949]: May 17 00:23:31.693 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 May 17 00:23:31.718404 coreos-metadata[1949]: May 17 00:23:31.693 INFO Fetch successful May 17 00:23:31.718404 coreos-metadata[1949]: May 17 00:23:31.693 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 May 17 00:23:31.667688 ntpd[1954]: unable to create socket on eth0 (5) for fe80::47e:92ff:fe8d:f405%2#123 May 17 00:23:31.719078 ntpd[1954]: 17 May 00:23:31 ntpd[1954]: unable to create socket on eth0 (5) for fe80::47e:92ff:fe8d:f405%2#123 May 17 00:23:31.719078 ntpd[1954]: 17 May 00:23:31 ntpd[1954]: failed to init interface for address fe80::47e:92ff:fe8d:f405%2 May 17 00:23:31.719078 ntpd[1954]: 17 May 00:23:31 ntpd[1954]: Listening on routing socket on fd #21 for interface updates May 17 00:23:31.719078 ntpd[1954]: 17 May 00:23:31 ntpd[1954]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 17 00:23:31.719078 ntpd[1954]: 17 May 00:23:31 ntpd[1954]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 17 00:23:31.667711 ntpd[1954]: failed to init interface for address fe80::47e:92ff:fe8d:f405%2 May 17 00:23:31.722596 coreos-metadata[1949]: May 17 00:23:31.720 INFO Fetch successful May 17 00:23:31.722596 coreos-metadata[1949]: May 17 00:23:31.720 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 May 17 00:23:31.667762 ntpd[1954]: Listening on routing socket on fd #21 for interface updates May 17 00:23:31.695495 ntpd[1954]: kernel reports 
TIME_ERROR: 0x41: Clock Unsynchronized May 17 00:23:31.695532 ntpd[1954]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 17 00:23:31.733369 coreos-metadata[1949]: May 17 00:23:31.723 INFO Fetch failed with 404: resource not found May 17 00:23:31.733369 coreos-metadata[1949]: May 17 00:23:31.723 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 May 17 00:23:31.733369 coreos-metadata[1949]: May 17 00:23:31.728 INFO Fetch successful May 17 00:23:31.733369 coreos-metadata[1949]: May 17 00:23:31.728 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 May 17 00:23:31.733645 extend-filesystems[1989]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required May 17 00:23:31.733645 extend-filesystems[1989]: old_desc_blocks = 1, new_desc_blocks = 1 May 17 00:23:31.733645 extend-filesystems[1989]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. May 17 00:23:31.731189 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 17 00:23:31.755262 bash[2025]: Updated "/home/core/.ssh/authorized_keys" May 17 00:23:31.755393 coreos-metadata[1949]: May 17 00:23:31.737 INFO Fetch successful May 17 00:23:31.755393 coreos-metadata[1949]: May 17 00:23:31.737 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 May 17 00:23:31.755393 coreos-metadata[1949]: May 17 00:23:31.740 INFO Fetch successful May 17 00:23:31.755393 coreos-metadata[1949]: May 17 00:23:31.740 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 May 17 00:23:31.755393 coreos-metadata[1949]: May 17 00:23:31.745 INFO Fetch successful May 17 00:23:31.755393 coreos-metadata[1949]: May 17 00:23:31.745 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 May 17 00:23:31.767025 extend-filesystems[1952]: Resized filesystem in /dev/nvme0n1p9 May 17 00:23:31.743730 systemd[1]: Starting sshkeys.service... May 17 00:23:31.774605 coreos-metadata[1949]: May 17 00:23:31.756 INFO Fetch successful May 17 00:23:31.750135 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:23:31.750379 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 17 00:23:31.821090 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1765) May 17 00:23:31.826666 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 17 00:23:31.834881 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 17 00:23:31.898269 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 17 00:23:31.899935 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 17 00:23:31.948965 dbus-daemon[1950]: [system] Successfully activated service 'org.freedesktop.hostname1' May 17 00:23:31.949288 systemd[1]: Started systemd-hostnamed.service - Hostname Service. May 17 00:23:31.956671 dbus-daemon[1950]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2002 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 17 00:23:31.979297 systemd[1]: Starting polkit.service - Authorization Manager... 
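[Annotation: the coreos-metadata sequence above ("Putting .../latest/api/token" followed by versioned meta-data GETs) is the IMDSv2 session flow: PUT a short-lived token, then present it on each read. A minimal sketch of that exchange with the documented AWS header names; it only does anything useful when run on an EC2 instance, and the paths mirror the ones fetched in the log:]

    import urllib.request

    IMDS = "http://169.254.169.254"

    def imds_token(ttl: int = 21600) -> str:
        # PUT a session token, as in the "Putting .../latest/api/token" line above.
        req = urllib.request.Request(
            f"{IMDS}/latest/api/token", method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)})
        return urllib.request.urlopen(req, timeout=2).read().decode()

    def imds_get(path: str, token: str) -> str:
        # GET a versioned metadata path, presenting the session token.
        req = urllib.request.Request(
            f"{IMDS}/2021-01-03/meta-data/{path}",
            headers={"X-aws-ec2-metadata-token": token})
        return urllib.request.urlopen(req, timeout=2).read().decode()

    tok = imds_token()
    for path in ("instance-id", "instance-type", "local-ipv4"):
        print(path, "=", imds_get(path, tok))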
May 17 00:23:32.020461 coreos-metadata[2034]: May 17 00:23:32.017 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 17 00:23:32.023787 coreos-metadata[2034]: May 17 00:23:32.023 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 May 17 00:23:32.024518 coreos-metadata[2034]: May 17 00:23:32.024 INFO Fetch successful May 17 00:23:32.024518 coreos-metadata[2034]: May 17 00:23:32.024 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 May 17 00:23:32.025273 coreos-metadata[2034]: May 17 00:23:32.025 INFO Fetch successful May 17 00:23:32.026845 polkitd[2079]: Started polkitd version 121 May 17 00:23:32.032351 unknown[2034]: wrote ssh authorized keys file for user: core May 17 00:23:32.038947 sshd_keygen[1995]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:23:32.036961 polkitd[2079]: Loading rules from directory /etc/polkit-1/rules.d May 17 00:23:32.037034 polkitd[2079]: Loading rules from directory /usr/share/polkit-1/rules.d May 17 00:23:32.044259 polkitd[2079]: Finished loading, compiling and executing 2 rules May 17 00:23:32.046815 dbus-daemon[1950]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 17 00:23:32.047250 polkitd[2079]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 17 00:23:32.059214 systemd[1]: Started polkit.service - Authorization Manager. May 17 00:23:32.100207 update-ssh-keys[2092]: Updated "/home/core/.ssh/authorized_keys" May 17 00:23:32.105342 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 17 00:23:32.111250 systemd[1]: Finished sshkeys.service. May 17 00:23:32.161977 systemd-hostnamed[2002]: Hostname set to (transient) May 17 00:23:32.162248 systemd-resolved[1843]: System hostname changed to 'ip-172-31-18-208'. May 17 00:23:32.193642 locksmithd[2005]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:23:32.198973 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 17 00:23:32.212802 systemd[1]: Starting issuegen.service - Generate /run/issue... May 17 00:23:32.256625 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:23:32.256976 systemd[1]: Finished issuegen.service - Generate /run/issue. May 17 00:23:32.268856 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 17 00:23:32.312382 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 17 00:23:32.352617 systemd[1]: Started getty@tty1.service - Getty on tty1. May 17 00:23:32.363270 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 17 00:23:32.364814 systemd[1]: Reached target getty.target - Login Prompts. May 17 00:23:32.424822 containerd[1976]: time="2025-05-17T00:23:32.424685539Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 17 00:23:32.473649 containerd[1976]: time="2025-05-17T00:23:32.473597453Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 00:23:32.476890 containerd[1976]: time="2025-05-17T00:23:32.476826875Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:23:32.477038 containerd[1976]: time="2025-05-17T00:23:32.477019881Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 00:23:32.477176 containerd[1976]: time="2025-05-17T00:23:32.477105745Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 00:23:32.477525 containerd[1976]: time="2025-05-17T00:23:32.477420177Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 17 00:23:32.477525 containerd[1976]: time="2025-05-17T00:23:32.477477534Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 17 00:23:32.477879 containerd[1976]: time="2025-05-17T00:23:32.477727485Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:23:32.477879 containerd[1976]: time="2025-05-17T00:23:32.477764120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 00:23:32.478339 containerd[1976]: time="2025-05-17T00:23:32.478186969Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:23:32.478339 containerd[1976]: time="2025-05-17T00:23:32.478213142Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 00:23:32.478339 containerd[1976]: time="2025-05-17T00:23:32.478236798Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:23:32.478339 containerd[1976]: time="2025-05-17T00:23:32.478273159Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:23:32.478782 containerd[1976]: time="2025-05-17T00:23:32.478631379Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 00:23:32.479253 containerd[1976]: time="2025-05-17T00:23:32.479049091Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:23:32.479336 containerd[1976]: time="2025-05-17T00:23:32.479232289Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:23:32.479401 containerd[1976]: time="2025-05-17T00:23:32.479387456Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 00:23:32.479674 containerd[1976]: time="2025-05-17T00:23:32.479580611Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 May 17 00:23:32.479674 containerd[1976]: time="2025-05-17T00:23:32.479641020Z" level=info msg="metadata content store policy set" policy=shared May 17 00:23:32.485545 containerd[1976]: time="2025-05-17T00:23:32.485351638Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:23:32.485545 containerd[1976]: time="2025-05-17T00:23:32.485495606Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:23:32.485988 containerd[1976]: time="2025-05-17T00:23:32.485523632Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 17 00:23:32.485988 containerd[1976]: time="2025-05-17T00:23:32.485757915Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 17 00:23:32.485988 containerd[1976]: time="2025-05-17T00:23:32.485782512Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 00:23:32.485988 containerd[1976]: time="2025-05-17T00:23:32.485936039Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 00:23:32.487444 containerd[1976]: time="2025-05-17T00:23:32.486712910Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 00:23:32.487444 containerd[1976]: time="2025-05-17T00:23:32.486851605Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 17 00:23:32.487444 containerd[1976]: time="2025-05-17T00:23:32.486876812Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 17 00:23:32.487444 containerd[1976]: time="2025-05-17T00:23:32.486896711Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 17 00:23:32.487444 containerd[1976]: time="2025-05-17T00:23:32.486916864Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 00:23:32.487444 containerd[1976]: time="2025-05-17T00:23:32.486937257Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:23:32.487444 containerd[1976]: time="2025-05-17T00:23:32.486956434Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 00:23:32.487444 containerd[1976]: time="2025-05-17T00:23:32.486977097Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 00:23:32.487444 containerd[1976]: time="2025-05-17T00:23:32.486998323Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 00:23:32.487444 containerd[1976]: time="2025-05-17T00:23:32.487017443Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:23:32.487444 containerd[1976]: time="2025-05-17T00:23:32.487036394Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 00:23:32.487444 containerd[1976]: time="2025-05-17T00:23:32.487054132Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 May 17 00:23:32.487444 containerd[1976]: time="2025-05-17T00:23:32.487092307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 00:23:32.487444 containerd[1976]: time="2025-05-17T00:23:32.487113329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 17 00:23:32.487974 containerd[1976]: time="2025-05-17T00:23:32.487133725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:23:32.487974 containerd[1976]: time="2025-05-17T00:23:32.487159611Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 00:23:32.487974 containerd[1976]: time="2025-05-17T00:23:32.487178919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 00:23:32.487974 containerd[1976]: time="2025-05-17T00:23:32.487205052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 00:23:32.487974 containerd[1976]: time="2025-05-17T00:23:32.487222930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 00:23:32.487974 containerd[1976]: time="2025-05-17T00:23:32.487242901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 00:23:32.487974 containerd[1976]: time="2025-05-17T00:23:32.487262355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 17 00:23:32.487974 containerd[1976]: time="2025-05-17T00:23:32.487282826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 17 00:23:32.487974 containerd[1976]: time="2025-05-17T00:23:32.487302747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 00:23:32.487974 containerd[1976]: time="2025-05-17T00:23:32.487320047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 17 00:23:32.487974 containerd[1976]: time="2025-05-17T00:23:32.487339364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:23:32.487974 containerd[1976]: time="2025-05-17T00:23:32.487368402Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 17 00:23:32.487974 containerd[1976]: time="2025-05-17T00:23:32.487400978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 17 00:23:32.489882 containerd[1976]: time="2025-05-17T00:23:32.487419815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 00:23:32.489882 containerd[1976]: time="2025-05-17T00:23:32.488491864Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:23:32.489882 containerd[1976]: time="2025-05-17T00:23:32.488574108Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 00:23:32.489882 containerd[1976]: time="2025-05-17T00:23:32.488602913Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 17 00:23:32.489882 containerd[1976]: time="2025-05-17T00:23:32.488697277Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 00:23:32.489882 containerd[1976]: time="2025-05-17T00:23:32.488718857Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 17 00:23:32.489882 containerd[1976]: time="2025-05-17T00:23:32.488734045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:23:32.489882 containerd[1976]: time="2025-05-17T00:23:32.488752801Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 17 00:23:32.489882 containerd[1976]: time="2025-05-17T00:23:32.488767554Z" level=info msg="NRI interface is disabled by configuration." May 17 00:23:32.489882 containerd[1976]: time="2025-05-17T00:23:32.488782369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 17 00:23:32.490331 containerd[1976]: time="2025-05-17T00:23:32.489182572Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:23:32.490331 containerd[1976]: time="2025-05-17T00:23:32.489274378Z" level=info msg="Connect containerd service" May 17 00:23:32.490331 containerd[1976]: time="2025-05-17T00:23:32.489319491Z" level=info msg="using legacy CRI server" May 17 00:23:32.490331 containerd[1976]: time="2025-05-17T00:23:32.489330512Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 17 00:23:32.490331 containerd[1976]: time="2025-05-17T00:23:32.489498131Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:23:32.491204 containerd[1976]: time="2025-05-17T00:23:32.491172771Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:23:32.491664 containerd[1976]: time="2025-05-17T00:23:32.491643232Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:23:32.491793 containerd[1976]: time="2025-05-17T00:23:32.491777515Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 00:23:32.493726 containerd[1976]: time="2025-05-17T00:23:32.491894063Z" level=info msg="Start subscribing containerd event" May 17 00:23:32.493726 containerd[1976]: time="2025-05-17T00:23:32.491953207Z" level=info msg="Start recovering state" May 17 00:23:32.493726 containerd[1976]: time="2025-05-17T00:23:32.492025368Z" level=info msg="Start event monitor" May 17 00:23:32.493726 containerd[1976]: time="2025-05-17T00:23:32.492046749Z" level=info msg="Start snapshots syncer" May 17 00:23:32.493726 containerd[1976]: time="2025-05-17T00:23:32.492058860Z" level=info msg="Start cni network conf syncer for default" May 17 00:23:32.493726 containerd[1976]: time="2025-05-17T00:23:32.492069354Z" level=info msg="Start streaming server" May 17 00:23:32.492214 systemd[1]: Started containerd.service - containerd container runtime. May 17 00:23:32.495457 containerd[1976]: time="2025-05-17T00:23:32.494083520Z" level=info msg="containerd successfully booted in 0.071057s" May 17 00:23:32.623775 ntpd[1954]: bind(24) AF_INET6 fe80::47e:92ff:fe8d:f405%2#123 flags 0x11 failed: Cannot assign requested address May 17 00:23:32.623840 ntpd[1954]: unable to create socket on eth0 (6) for fe80::47e:92ff:fe8d:f405%2#123 May 17 00:23:32.624188 ntpd[1954]: 17 May 00:23:32 ntpd[1954]: bind(24) AF_INET6 fe80::47e:92ff:fe8d:f405%2#123 flags 0x11 failed: Cannot assign requested address May 17 00:23:32.624188 ntpd[1954]: 17 May 00:23:32 ntpd[1954]: unable to create socket on eth0 (6) for fe80::47e:92ff:fe8d:f405%2#123 May 17 00:23:32.624188 ntpd[1954]: 17 May 00:23:32 ntpd[1954]: failed to init interface for address fe80::47e:92ff:fe8d:f405%2 May 17 00:23:32.623856 ntpd[1954]: failed to init interface for address fe80::47e:92ff:fe8d:f405%2 May 17 00:23:32.643979 tar[1965]: linux-amd64/README.md May 17 00:23:32.654706 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 17 00:23:33.385760 systemd-networkd[1837]: eth0: Gained IPv6LL May 17 00:23:33.388318 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
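[Annotation: the containerd error above, "no network config found in /etc/cni/net.d: cni plugin not initialized", is expected at this stage; a CNI conflist normally appears later when the cluster network add-on is installed. Purely as an illustration of the file shape that loader looks for, here is a sketch that writes a standard bridge/host-local conflist; the network name, subnet, and filename are assumptions for the example, not values from this log:]

    import json, pathlib

    # Illustrative bridge/host-local conflist per the CNI spec; everything below
    # (name, subnet, filename) is an assumed example, not taken from this system.
    conf = {
        "cniVersion": "1.0.0",
        "name": "examplenet",
        "plugins": [{
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "ranges": [[{"subnet": "10.88.0.0/16"}]],
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        }],
    }
    pathlib.Path("/etc/cni/net.d/10-examplenet.conflist").write_text(
        json.dumps(conf, indent=2))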
May 17 00:23:33.389233 systemd[1]: Reached target network-online.target - Network is Online. May 17 00:23:33.396670 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. May 17 00:23:33.403504 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:23:33.407510 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 17 00:23:33.443066 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 17 00:23:33.462882 amazon-ssm-agent[2175]: Initializing new seelog logger May 17 00:23:33.462882 amazon-ssm-agent[2175]: New Seelog Logger Creation Complete May 17 00:23:33.462882 amazon-ssm-agent[2175]: 2025/05/17 00:23:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:23:33.462882 amazon-ssm-agent[2175]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:23:33.463278 amazon-ssm-agent[2175]: 2025/05/17 00:23:33 processing appconfig overrides May 17 00:23:33.463889 amazon-ssm-agent[2175]: 2025/05/17 00:23:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:23:33.463889 amazon-ssm-agent[2175]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:23:33.463889 amazon-ssm-agent[2175]: 2025/05/17 00:23:33 processing appconfig overrides May 17 00:23:33.463889 amazon-ssm-agent[2175]: 2025/05/17 00:23:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:23:33.463889 amazon-ssm-agent[2175]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:23:33.463889 amazon-ssm-agent[2175]: 2025/05/17 00:23:33 processing appconfig overrides May 17 00:23:33.464230 amazon-ssm-agent[2175]: 2025-05-17 00:23:33 INFO Proxy environment variables: May 17 00:23:33.467103 amazon-ssm-agent[2175]: 2025/05/17 00:23:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:23:33.467103 amazon-ssm-agent[2175]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:23:33.467221 amazon-ssm-agent[2175]: 2025/05/17 00:23:33 processing appconfig overrides May 17 00:23:33.563597 amazon-ssm-agent[2175]: 2025-05-17 00:23:33 INFO http_proxy: May 17 00:23:33.648485 amazon-ssm-agent[2175]: 2025-05-17 00:23:33 INFO no_proxy: May 17 00:23:33.648485 amazon-ssm-agent[2175]: 2025-05-17 00:23:33 INFO https_proxy: May 17 00:23:33.648485 amazon-ssm-agent[2175]: 2025-05-17 00:23:33 INFO Checking if agent identity type OnPrem can be assumed May 17 00:23:33.648485 amazon-ssm-agent[2175]: 2025-05-17 00:23:33 INFO Checking if agent identity type EC2 can be assumed May 17 00:23:33.648485 amazon-ssm-agent[2175]: 2025-05-17 00:23:33 INFO Agent will take identity from EC2 May 17 00:23:33.648485 amazon-ssm-agent[2175]: 2025-05-17 00:23:33 INFO [amazon-ssm-agent] using named pipe channel for IPC May 17 00:23:33.648485 amazon-ssm-agent[2175]: 2025-05-17 00:23:33 INFO [amazon-ssm-agent] using named pipe channel for IPC May 17 00:23:33.648485 amazon-ssm-agent[2175]: 2025-05-17 00:23:33 INFO [amazon-ssm-agent] using named pipe channel for IPC May 17 00:23:33.648485 amazon-ssm-agent[2175]: 2025-05-17 00:23:33 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 May 17 00:23:33.648485 amazon-ssm-agent[2175]: 2025-05-17 00:23:33 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 May 17 00:23:33.648485 amazon-ssm-agent[2175]: 2025-05-17 00:23:33 INFO [amazon-ssm-agent] Starting Core Agent May 17 00:23:33.648485 amazon-ssm-agent[2175]: 2025-05-17 00:23:33 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration May 17 00:23:33.648485 amazon-ssm-agent[2175]: 2025-05-17 00:23:33 INFO [Registrar] Starting registrar module May 17 00:23:33.648485 amazon-ssm-agent[2175]: 2025-05-17 00:23:33 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration May 17 00:23:33.648485 amazon-ssm-agent[2175]: 2025-05-17 00:23:33 INFO [EC2Identity] EC2 registration was successful. May 17 00:23:33.648485 amazon-ssm-agent[2175]: 2025-05-17 00:23:33 INFO [CredentialRefresher] credentialRefresher has started May 17 00:23:33.648485 amazon-ssm-agent[2175]: 2025-05-17 00:23:33 INFO [CredentialRefresher] Starting credentials refresher loop May 17 00:23:33.648485 amazon-ssm-agent[2175]: 2025-05-17 00:23:33 INFO EC2RoleProvider Successfully connected with instance profile role credentials May 17 00:23:33.661116 amazon-ssm-agent[2175]: 2025-05-17 00:23:33 INFO [CredentialRefresher] Next credential rotation will be in 31.199993819216665 minutes May 17 00:23:34.434817 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 17 00:23:34.440239 systemd[1]: Started sshd@0-172.31.18.208:22-147.75.109.163:59392.service - OpenSSH per-connection server daemon (147.75.109.163:59392). May 17 00:23:34.663346 amazon-ssm-agent[2175]: 2025-05-17 00:23:34 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process May 17 00:23:34.679135 sshd[2195]: Accepted publickey for core from 147.75.109.163 port 59392 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:23:34.684200 sshd[2195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:34.700735 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 17 00:23:34.712549 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 17 00:23:34.718142 systemd-logind[1960]: New session 1 of user core. May 17 00:23:34.742065 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 17 00:23:34.752841 systemd[1]: Starting user@500.service - User Manager for UID 500... May 17 00:23:34.761055 (systemd)[2205]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:23:34.763716 amazon-ssm-agent[2175]: 2025-05-17 00:23:34 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2198) started May 17 00:23:34.866052 amazon-ssm-agent[2175]: 2025-05-17 00:23:34 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds May 17 00:23:34.907325 systemd[2205]: Queued start job for default target default.target. May 17 00:23:34.915541 systemd[2205]: Created slice app.slice - User Application Slice. May 17 00:23:34.915573 systemd[2205]: Reached target paths.target - Paths. May 17 00:23:34.915587 systemd[2205]: Reached target timers.target - Timers. May 17 00:23:34.916806 systemd[2205]: Starting dbus.socket - D-Bus User Message Bus Socket... May 17 00:23:34.929468 systemd[2205]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 17 00:23:34.929613 systemd[2205]: Reached target sockets.target - Sockets. May 17 00:23:34.929629 systemd[2205]: Reached target basic.target - Basic System. May 17 00:23:34.929670 systemd[2205]: Reached target default.target - Main User Target. May 17 00:23:34.929701 systemd[2205]: Startup finished in 156ms. 
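[Annotation: each accepted login above follows sshd's fixed format, "Accepted publickey for USER from IP port PORT ssh2: KEYTYPE FINGERPRINT". A small regex sketch that pulls those fields out of the line logged above:]

    import re

    pat = re.compile(
        r"Accepted publickey for (?P<user>\S+) from (?P<ip>\S+) "
        r"port (?P<port>\d+) ssh2: (?P<keytype>\S+) (?P<fp>\S+)")

    line = ("Accepted publickey for core from 147.75.109.163 port 59392 "
            "ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4")
    m = pat.search(line)
    print(m["user"], m["ip"], m["port"], m["keytype"], m["fp"])
    # core 147.75.109.163 59392 RSA SHA256:E8bmmc3B2wMCD2qz/...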
May 17 00:23:34.929811 systemd[1]: Started user@500.service - User Manager for UID 500. May 17 00:23:34.935617 systemd[1]: Started session-1.scope - Session 1 of User core. May 17 00:23:35.026605 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:23:35.028028 systemd[1]: Reached target multi-user.target - Multi-User System. May 17 00:23:35.029379 (kubelet)[2224]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:23:35.030062 systemd[1]: Startup finished in 639ms (kernel) + 7.526s (initrd) + 6.848s (userspace) = 15.014s. May 17 00:23:35.076045 systemd[1]: Started sshd@1-172.31.18.208:22-147.75.109.163:59406.service - OpenSSH per-connection server daemon (147.75.109.163:59406). May 17 00:23:35.234656 sshd[2231]: Accepted publickey for core from 147.75.109.163 port 59406 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:23:35.236095 sshd[2231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:35.240372 systemd-logind[1960]: New session 2 of user core. May 17 00:23:35.242572 systemd[1]: Started session-2.scope - Session 2 of User core. May 17 00:23:35.369181 sshd[2231]: pam_unix(sshd:session): session closed for user core May 17 00:23:35.373436 systemd[1]: sshd@1-172.31.18.208:22-147.75.109.163:59406.service: Deactivated successfully. May 17 00:23:35.376045 systemd[1]: session-2.scope: Deactivated successfully. May 17 00:23:35.377024 systemd-logind[1960]: Session 2 logged out. Waiting for processes to exit. May 17 00:23:35.378542 systemd-logind[1960]: Removed session 2. May 17 00:23:35.398459 systemd[1]: Started sshd@2-172.31.18.208:22-147.75.109.163:59412.service - OpenSSH per-connection server daemon (147.75.109.163:59412). May 17 00:23:35.562729 sshd[2242]: Accepted publickey for core from 147.75.109.163 port 59412 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:23:35.564040 sshd[2242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:35.568563 systemd-logind[1960]: New session 3 of user core. May 17 00:23:35.579626 systemd[1]: Started session-3.scope - Session 3 of User core. May 17 00:23:35.623775 ntpd[1954]: Listen normally on 7 eth0 [fe80::47e:92ff:fe8d:f405%2]:123 May 17 00:23:35.624156 ntpd[1954]: 17 May 00:23:35 ntpd[1954]: Listen normally on 7 eth0 [fe80::47e:92ff:fe8d:f405%2]:123 May 17 00:23:35.693121 sshd[2242]: pam_unix(sshd:session): session closed for user core May 17 00:23:35.696282 systemd[1]: sshd@2-172.31.18.208:22-147.75.109.163:59412.service: Deactivated successfully. May 17 00:23:35.697789 systemd[1]: session-3.scope: Deactivated successfully. May 17 00:23:35.699032 systemd-logind[1960]: Session 3 logged out. Waiting for processes to exit. May 17 00:23:35.700064 systemd-logind[1960]: Removed session 3. May 17 00:23:35.728692 systemd[1]: Started sshd@3-172.31.18.208:22-147.75.109.163:59414.service - OpenSSH per-connection server daemon (147.75.109.163:59414). May 17 00:23:35.884058 sshd[2250]: Accepted publickey for core from 147.75.109.163 port 59414 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:23:35.885896 sshd[2250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:35.890047 systemd-logind[1960]: New session 4 of user core. May 17 00:23:35.899610 systemd[1]: Started session-4.scope - Session 4 of User core. 
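[Annotation: the "Startup finished" line above breaks boot into kernel, initrd, and userspace phases. Re-adding the displayed figures lands 1 ms short of the printed total, which is consistent with each phase being rounded independently for display:]

    # Phases from the "Startup finished" line above, in seconds.
    kernel, initrd, userspace = 0.639, 7.526, 6.848
    print(f"{kernel + initrd + userspace:.3f}s")  # 15.013s vs. the logged 15.014s
    # The 1 ms gap is display rounding, not a missing phase.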
May 17 00:23:35.918038 kubelet[2224]: E0517 00:23:35.917983 2224 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:23:35.920309 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:23:35.920530 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:23:36.015657 sshd[2250]: pam_unix(sshd:session): session closed for user core May 17 00:23:36.018254 systemd[1]: sshd@3-172.31.18.208:22-147.75.109.163:59414.service: Deactivated successfully. May 17 00:23:36.020498 systemd-logind[1960]: Session 4 logged out. Waiting for processes to exit. May 17 00:23:36.021013 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:23:36.021809 systemd-logind[1960]: Removed session 4. May 17 00:23:36.051553 systemd[1]: Started sshd@4-172.31.18.208:22-147.75.109.163:59426.service - OpenSSH per-connection server daemon (147.75.109.163:59426). May 17 00:23:36.218020 sshd[2258]: Accepted publickey for core from 147.75.109.163 port 59426 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:23:36.219319 sshd[2258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:36.223527 systemd-logind[1960]: New session 5 of user core. May 17 00:23:36.233641 systemd[1]: Started session-5.scope - Session 5 of User core. May 17 00:23:36.383957 sudo[2261]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 17 00:23:36.384248 sudo[2261]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:23:36.397807 sudo[2261]: pam_unix(sudo:session): session closed for user root May 17 00:23:36.421120 sshd[2258]: pam_unix(sshd:session): session closed for user core May 17 00:23:36.424394 systemd[1]: sshd@4-172.31.18.208:22-147.75.109.163:59426.service: Deactivated successfully. May 17 00:23:36.426173 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:23:36.427343 systemd-logind[1960]: Session 5 logged out. Waiting for processes to exit. May 17 00:23:36.428281 systemd-logind[1960]: Removed session 5. May 17 00:23:36.452313 systemd[1]: Started sshd@5-172.31.18.208:22-147.75.109.163:35494.service - OpenSSH per-connection server daemon (147.75.109.163:35494). May 17 00:23:36.612140 sshd[2266]: Accepted publickey for core from 147.75.109.163 port 35494 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:23:36.613138 sshd[2266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:36.617514 systemd-logind[1960]: New session 6 of user core. May 17 00:23:36.625673 systemd[1]: Started session-6.scope - Session 6 of User core. 
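[Annotation: the kubelet exit above is the stock failure mode before cluster join: /var/lib/kubelet/config.yaml does not exist yet, because kubeadm writes it during init/join. As a hedged sketch only, this is the smallest KubeletConfiguration stub that the loader would parse; a real deployment gets a much fuller file from kubeadm rather than anything hand-written like this:]

    import pathlib, textwrap

    # Minimal KubeletConfiguration stub (assumed example; kubeadm normally
    # generates the real file with the cluster's actual settings).
    CONFIG = textwrap.dedent("""\
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
    """)
    pathlib.Path("/var/lib/kubelet/config.yaml").write_text(CONFIG)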
May 17 00:23:36.722849 sudo[2270]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 17 00:23:36.723126 sudo[2270]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:23:36.726496 sudo[2270]: pam_unix(sudo:session): session closed for user root May 17 00:23:36.731695 sudo[2269]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 17 00:23:36.731972 sudo[2269]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:23:36.744694 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 17 00:23:36.747476 auditctl[2273]: No rules May 17 00:23:36.747792 systemd[1]: audit-rules.service: Deactivated successfully. May 17 00:23:36.747958 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 17 00:23:36.750147 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:23:36.777601 augenrules[2291]: No rules May 17 00:23:36.778246 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:23:36.779456 sudo[2269]: pam_unix(sudo:session): session closed for user root May 17 00:23:36.801989 sshd[2266]: pam_unix(sshd:session): session closed for user core May 17 00:23:36.804623 systemd[1]: sshd@5-172.31.18.208:22-147.75.109.163:35494.service: Deactivated successfully. May 17 00:23:36.806710 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:23:36.807488 systemd-logind[1960]: Session 6 logged out. Waiting for processes to exit. May 17 00:23:36.808231 systemd-logind[1960]: Removed session 6. May 17 00:23:36.829723 systemd[1]: Started sshd@6-172.31.18.208:22-147.75.109.163:35508.service - OpenSSH per-connection server daemon (147.75.109.163:35508). May 17 00:23:36.987999 sshd[2299]: Accepted publickey for core from 147.75.109.163 port 35508 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:23:36.988841 sshd[2299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:36.993274 systemd-logind[1960]: New session 7 of user core. May 17 00:23:37.006665 systemd[1]: Started session-7.scope - Session 7 of User core. May 17 00:23:37.103898 sudo[2302]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:23:37.104178 sudo[2302]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:23:37.621838 systemd[1]: Starting docker.service - Docker Application Container Engine... May 17 00:23:37.621960 (dockerd)[2317]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 17 00:23:38.240063 dockerd[2317]: time="2025-05-17T00:23:38.240007674Z" level=info msg="Starting up" May 17 00:23:38.573541 systemd[1]: var-lib-docker-metacopy\x2dcheck529894630-merged.mount: Deactivated successfully. May 17 00:23:38.590326 dockerd[2317]: time="2025-05-17T00:23:38.590277331Z" level=info msg="Loading containers: start." May 17 00:23:40.263086 systemd-resolved[1843]: Clock change detected. Flushing caches. 
May 17 00:23:40.331221 ntpd[1954]: receive: Unexpected origin timestamp 0xebd25589.9fa40233 does not match aorg 0000000000.00000000 from server@209.51.161.238 xmt 0xebd2558c.4d93fe3d May 17 00:23:40.331597 ntpd[1954]: 17 May 00:23:40 ntpd[1954]: receive: Unexpected origin timestamp 0xebd25589.9fa40233 does not match aorg 0000000000.00000000 from server@209.51.161.238 xmt 0xebd2558c.4d93fe3d May 17 00:23:40.360651 kernel: Initializing XFRM netlink socket May 17 00:23:40.400346 (udev-worker)[2341]: Network interface NamePolicy= disabled on kernel command line. May 17 00:23:40.455005 systemd-networkd[1837]: docker0: Link UP May 17 00:23:40.497257 dockerd[2317]: time="2025-05-17T00:23:40.496903920Z" level=info msg="Loading containers: done." May 17 00:23:40.529522 dockerd[2317]: time="2025-05-17T00:23:40.529470926Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 00:23:40.529798 dockerd[2317]: time="2025-05-17T00:23:40.529614658Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 17 00:23:40.529798 dockerd[2317]: time="2025-05-17T00:23:40.529754372Z" level=info msg="Daemon has completed initialization" May 17 00:23:40.561630 dockerd[2317]: time="2025-05-17T00:23:40.561519786Z" level=info msg="API listen on /run/docker.sock" May 17 00:23:40.561651 systemd[1]: Started docker.service - Docker Application Container Engine. May 17 00:23:41.661511 containerd[1976]: time="2025-05-17T00:23:41.661468372Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\"" May 17 00:23:42.243487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount712673072.mount: Deactivated successfully. 
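[Annotation: the timestamps jump from 00:23:38.59 to 00:23:40.26 around the "Clock change detected" line above, i.e. ntpd stepped the clock forward and systemd-resolved flushed its caches. The hex values in the ntpd complaint are 32-bit NTP seconds, which count from 1900-01-01; decoding the xmt field confirms it matches the post-step wall clock:]

    from datetime import datetime, timezone

    # NTP seconds count from 1900-01-01, Unix from 1970-01-01: 2208988800 s apart.
    NTP_UNIX_OFFSET = 2_208_988_800

    def ntp_seconds_to_utc(hex_secs: str) -> datetime:
        return datetime.fromtimestamp(int(hex_secs, 16) - NTP_UNIX_OFFSET,
                                      tz=timezone.utc)

    print(ntp_seconds_to_utc("ebd2558c"))  # 2025-05-17 00:23:40+00:00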
May 17 00:23:43.878834 containerd[1976]: time="2025-05-17T00:23:43.878780440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:43.880128 containerd[1976]: time="2025-05-17T00:23:43.880069036Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=28797811" May 17 00:23:43.883022 containerd[1976]: time="2025-05-17T00:23:43.882964125Z" level=info msg="ImageCreate event name:\"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:43.887359 containerd[1976]: time="2025-05-17T00:23:43.887296039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:43.889028 containerd[1976]: time="2025-05-17T00:23:43.888518031Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"28794611\" in 2.227004572s" May 17 00:23:43.889028 containerd[1976]: time="2025-05-17T00:23:43.888584869Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\"" May 17 00:23:43.889470 containerd[1976]: time="2025-05-17T00:23:43.889329724Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\"" May 17 00:23:45.925595 containerd[1976]: time="2025-05-17T00:23:45.925540044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:45.926493 containerd[1976]: time="2025-05-17T00:23:45.926446862Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=24782523" May 17 00:23:45.927839 containerd[1976]: time="2025-05-17T00:23:45.927795412Z" level=info msg="ImageCreate event name:\"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:45.930580 containerd[1976]: time="2025-05-17T00:23:45.930522569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:45.931472 containerd[1976]: time="2025-05-17T00:23:45.931350757Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"26384363\" in 2.041824124s" May 17 00:23:45.931472 containerd[1976]: time="2025-05-17T00:23:45.931383147Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\"" May 17 00:23:45.932062 
containerd[1976]: time="2025-05-17T00:23:45.931901265Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\"" May 17 00:23:47.572224 containerd[1976]: time="2025-05-17T00:23:47.572169725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:47.573463 containerd[1976]: time="2025-05-17T00:23:47.573415635Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=19176063" May 17 00:23:47.576292 containerd[1976]: time="2025-05-17T00:23:47.574405174Z" level=info msg="ImageCreate event name:\"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:47.577550 containerd[1976]: time="2025-05-17T00:23:47.577511698Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:47.578462 containerd[1976]: time="2025-05-17T00:23:47.578430686Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"20777921\" in 1.646499736s" May 17 00:23:47.578549 containerd[1976]: time="2025-05-17T00:23:47.578465521Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\"" May 17 00:23:47.579441 containerd[1976]: time="2025-05-17T00:23:47.579416230Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\"" May 17 00:23:47.751581 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 17 00:23:47.756745 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:23:47.966371 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:23:47.971189 (kubelet)[2528]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:23:48.014104 kubelet[2528]: E0517 00:23:48.014026 2528 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:23:48.017824 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:23:48.017976 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:23:48.739496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1615438154.mount: Deactivated successfully. 
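[Annotation: containerd's "Pulled image ... in ..." lines report the compressed size and wall-clock pull time, so effective throughput falls out directly. Worked for the kube-apiserver pull logged above:]

    # Numbers from the kube-apiserver pull above: 28794611 bytes in 2.227004572 s.
    size_bytes, secs = 28_794_611, 2.227004572
    print(f"{size_bytes / secs / 2**20:.1f} MiB/s")  # ~12.3 MiB/s effective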
May 17 00:23:49.337844 containerd[1976]: time="2025-05-17T00:23:49.337798293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:49.339955 containerd[1976]: time="2025-05-17T00:23:49.339862714Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=30892872" May 17 00:23:49.342549 containerd[1976]: time="2025-05-17T00:23:49.342306834Z" level=info msg="ImageCreate event name:\"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:49.345813 containerd[1976]: time="2025-05-17T00:23:49.345761967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:49.346624 containerd[1976]: time="2025-05-17T00:23:49.346437484Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"30891891\" in 1.766985896s" May 17 00:23:49.346624 containerd[1976]: time="2025-05-17T00:23:49.346474506Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\"" May 17 00:23:49.347240 containerd[1976]: time="2025-05-17T00:23:49.347203901Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 00:23:49.910275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2854934175.mount: Deactivated successfully. 
May 17 00:23:50.845021 containerd[1976]: time="2025-05-17T00:23:50.844966411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:50.847851 containerd[1976]: time="2025-05-17T00:23:50.847794453Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
May 17 00:23:50.851144 containerd[1976]: time="2025-05-17T00:23:50.851058736Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:50.857935 containerd[1976]: time="2025-05-17T00:23:50.857879062Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:50.859055 containerd[1976]: time="2025-05-17T00:23:50.858934169Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.511699827s"
May 17 00:23:50.859055 containerd[1976]: time="2025-05-17T00:23:50.858966069Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
May 17 00:23:50.859789 containerd[1976]: time="2025-05-17T00:23:50.859600548Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 17 00:23:51.374039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1559798813.mount: Deactivated successfully.
May 17 00:23:51.385661 containerd[1976]: time="2025-05-17T00:23:51.385609213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:51.387663 containerd[1976]: time="2025-05-17T00:23:51.387610465Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
May 17 00:23:51.389894 containerd[1976]: time="2025-05-17T00:23:51.389840859Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:51.393420 containerd[1976]: time="2025-05-17T00:23:51.393353581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:51.394117 containerd[1976]: time="2025-05-17T00:23:51.393962895Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 534.332853ms"
May 17 00:23:51.394117 containerd[1976]: time="2025-05-17T00:23:51.394000147Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 17 00:23:51.394746 containerd[1976]: time="2025-05-17T00:23:51.394576050Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
May 17 00:23:51.961367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1784641115.mount: Deactivated successfully.
May 17 00:23:54.086928 containerd[1976]: time="2025-05-17T00:23:54.086863935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:54.089694 containerd[1976]: time="2025-05-17T00:23:54.089637580Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360"
May 17 00:23:54.093885 containerd[1976]: time="2025-05-17T00:23:54.093820226Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:54.096771 containerd[1976]: time="2025-05-17T00:23:54.096724894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:54.098118 containerd[1976]: time="2025-05-17T00:23:54.097812995Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.703211976s"
May 17 00:23:54.098118 containerd[1976]: time="2025-05-17T00:23:54.097851995Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
May 17 00:23:56.864412 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
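These PullImage/ImageCreate sequences are emitted by containerd's CRI plugin. The same pull can be replayed through the containerd Go client (github.com/containerd/containerd); a hedged sketch, assuming the default socket path and the "k8s.io" namespace the CRI plugin uses:

```go
// Sketch: replaying one of the pulls above with the containerd Go client.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// WithPullUnpack also unpacks the snapshot, which is what makes the
	// image immediately runnable once the pull returns.
	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.10", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name(), "digest", img.Target().Digest)
}
```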
May 17 00:23:56.870868 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:23:56.903362 systemd[1]: Reloading requested from client PID 2681 ('systemctl') (unit session-7.scope)...
May 17 00:23:56.903380 systemd[1]: Reloading...
May 17 00:23:57.017561 zram_generator::config[2719]: No configuration found.
May 17 00:23:57.182716 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:23:57.275755 systemd[1]: Reloading finished in 371 ms.
May 17 00:23:57.331822 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:23:57.335037 systemd[1]: kubelet.service: Deactivated successfully.
May 17 00:23:57.335286 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:23:57.340829 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:23:57.543716 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:23:57.549852 (kubelet)[2787]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 17 00:23:57.601577 kubelet[2787]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 17 00:23:57.601925 kubelet[2787]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 17 00:23:57.601925 kubelet[2787]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
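The Stopping/Started pair around the daemon reload above is an ordinary systemd job cycle, driven over D-Bus (here by systemctl, PID 2681). A sketch of the same restart issued programmatically with github.com/coreos/go-systemd; this needs root to talk to the system bus:

```go
// Sketch: restart kubelet.service over D-Bus the way systemctl does.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/coreos/go-systemd/v22/dbus"
)

func main() {
	ctx := context.Background()

	conn, err := dbus.NewWithContext(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// "replace" queues the job the same way `systemctl restart` does.
	done := make(chan string, 1)
	if _, err := conn.RestartUnitContext(ctx, "kubelet.service", "replace", done); err != nil {
		log.Fatal(err)
	}
	fmt.Println("kubelet.service restart job:", <-done)
}
```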
May 17 00:23:57.602038 kubelet[2787]: I0517 00:23:57.602000 2787 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 17 00:23:57.903983 kubelet[2787]: I0517 00:23:57.903658 2787 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
May 17 00:23:57.903983 kubelet[2787]: I0517 00:23:57.903692 2787 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 17 00:23:57.903983 kubelet[2787]: I0517 00:23:57.903957 2787 server.go:954] "Client rotation is on, will bootstrap in background"
May 17 00:23:57.944875 kubelet[2787]: E0517 00:23:57.944828 2787 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.18.208:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.18.208:6443: connect: connection refused" logger="UnhandledError"
May 17 00:23:57.945951 kubelet[2787]: I0517 00:23:57.945796 2787 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 17 00:23:57.974166 kubelet[2787]: E0517 00:23:57.974113 2787 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 17 00:23:57.974166 kubelet[2787]: I0517 00:23:57.974162 2787 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 17 00:23:57.981329 kubelet[2787]: I0517 00:23:57.981291 2787 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 17 00:23:57.983926 kubelet[2787]: I0517 00:23:57.983864 2787 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 17 00:23:57.984095 kubelet[2787]: I0517 00:23:57.983920 2787 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-208","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 17 00:23:57.986247 kubelet[2787]: I0517 00:23:57.986200 2787 topology_manager.go:138] "Creating topology manager with none policy"
May 17 00:23:57.986247 kubelet[2787]: I0517 00:23:57.986230 2787 container_manager_linux.go:304] "Creating device plugin manager"
May 17 00:23:57.987955 kubelet[2787]: I0517 00:23:57.987922 2787 state_mem.go:36] "Initialized new in-memory state store"
May 17 00:23:57.993747 kubelet[2787]: I0517 00:23:57.993443 2787 kubelet.go:446] "Attempting to sync node with API server"
May 17 00:23:57.993747 kubelet[2787]: I0517 00:23:57.993488 2787 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 17 00:23:57.993747 kubelet[2787]: I0517 00:23:57.993513 2787 kubelet.go:352] "Adding apiserver pod source"
May 17 00:23:57.993747 kubelet[2787]: I0517 00:23:57.993524 2787 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 17 00:23:57.998761 kubelet[2787]: W0517 00:23:57.997845 2787 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.208:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-208&limit=500&resourceVersion=0": dial tcp 172.31.18.208:6443: connect: connection refused
May 17 00:23:57.998761 kubelet[2787]: E0517 00:23:57.998669 2787 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.18.208:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-208&limit=500&resourceVersion=0\": dial tcp 172.31.18.208:6443: connect: connection refused" logger="UnhandledError"
May 17 00:23:57.998896 kubelet[2787]: I0517 00:23:57.998786 2787 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
May 17 00:23:58.005335 kubelet[2787]: I0517 00:23:58.004761 2787 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 17 00:23:58.005335 kubelet[2787]: W0517 00:23:58.004832 2787 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 17 00:23:58.011481 kubelet[2787]: I0517 00:23:58.011447 2787 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 17 00:23:58.011481 kubelet[2787]: I0517 00:23:58.011487 2787 server.go:1287] "Started kubelet"
May 17 00:23:58.013668 kubelet[2787]: W0517 00:23:58.013060 2787 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.208:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.18.208:6443: connect: connection refused
May 17 00:23:58.013668 kubelet[2787]: E0517 00:23:58.013113 2787 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.18.208:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.208:6443: connect: connection refused" logger="UnhandledError"
May 17 00:23:58.013668 kubelet[2787]: I0517 00:23:58.013153 2787 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 17 00:23:58.019334 kubelet[2787]: I0517 00:23:58.019301 2787 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 17 00:23:58.020924 kubelet[2787]: I0517 00:23:58.020598 2787 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 17 00:23:58.020924 kubelet[2787]: I0517 00:23:58.020864 2787 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 17 00:23:58.028273 kubelet[2787]: E0517 00:23:58.023046 2787 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.208:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.208:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-208.184028bac278a8ba default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-208,UID:ip-172-31-18-208,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-208,},FirstTimestamp:2025-05-17 00:23:58.011467962 +0000 UTC m=+0.458212883,LastTimestamp:2025-05-17 00:23:58.011467962 +0000 UTC m=+0.458212883,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-208,}"
May 17 00:23:58.031258 kubelet[2787]: I0517 00:23:58.028659 2787 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 17 00:23:58.031258 kubelet[2787]: I0517 00:23:58.028797 2787 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 17 00:23:58.031258 kubelet[2787]: I0517 00:23:58.030570 2787 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 17 00:23:58.031258 kubelet[2787]: I0517 00:23:58.030615 2787 reconciler.go:26] "Reconciler: start to sync state"
May 17 00:23:58.031258 kubelet[2787]: I0517 00:23:58.030673 2787 server.go:479] "Adding debug handlers to kubelet server"
May 17 00:23:58.031258 kubelet[2787]: W0517 00:23:58.031167 2787 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.208:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.208:6443: connect: connection refused
May 17 00:23:58.031258 kubelet[2787]: E0517 00:23:58.031207 2787 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.18.208:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.208:6443: connect: connection refused" logger="UnhandledError"
May 17 00:23:58.031824 kubelet[2787]: E0517 00:23:58.031806 2787 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-18-208\" not found"
May 17 00:23:58.031962 kubelet[2787]: E0517 00:23:58.031946 2787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.208:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-208?timeout=10s\": dial tcp 172.31.18.208:6443: connect: connection refused" interval="200ms"
May 17 00:23:58.039025 kubelet[2787]: I0517 00:23:58.039003 2787 factory.go:221] Registration of the systemd container factory successfully
May 17 00:23:58.039449 kubelet[2787]: I0517 00:23:58.039430 2787 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 17 00:23:58.044400 kubelet[2787]: E0517 00:23:58.043557 2787 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 17 00:23:58.046240 kubelet[2787]: I0517 00:23:58.044611 2787 factory.go:221] Registration of the containerd container factory successfully
May 17 00:23:58.046382 kubelet[2787]: I0517 00:23:58.046360 2787 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 17 00:23:58.047522 kubelet[2787]: I0517 00:23:58.047503 2787 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 17 00:23:58.051502 kubelet[2787]: I0517 00:23:58.047632 2787 status_manager.go:227] "Starting to sync pod status with apiserver"
May 17 00:23:58.051502 kubelet[2787]: I0517 00:23:58.047652 2787 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 17 00:23:58.051502 kubelet[2787]: I0517 00:23:58.047658 2787 kubelet.go:2382] "Starting kubelet main sync loop"
May 17 00:23:58.051502 kubelet[2787]: E0517 00:23:58.047698 2787 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 17 00:23:58.057670 kubelet[2787]: W0517 00:23:58.057015 2787 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.208:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.208:6443: connect: connection refused
May 17 00:23:58.057670 kubelet[2787]: E0517 00:23:58.057085 2787 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.18.208:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.208:6443: connect: connection refused" logger="UnhandledError"
May 17 00:23:58.081904 kubelet[2787]: I0517 00:23:58.081846 2787 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 17 00:23:58.081904 kubelet[2787]: I0517 00:23:58.081866 2787 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 17 00:23:58.081904 kubelet[2787]: I0517 00:23:58.081881 2787 state_mem.go:36] "Initialized new in-memory state store"
May 17 00:23:58.086574 kubelet[2787]: I0517 00:23:58.086524 2787 policy_none.go:49] "None policy: Start"
May 17 00:23:58.086574 kubelet[2787]: I0517 00:23:58.086572 2787 memory_manager.go:186] "Starting memorymanager" policy="None"
May 17 00:23:58.086574 kubelet[2787]: I0517 00:23:58.086586 2787 state_mem.go:35] "Initializing new in-memory state store"
May 17 00:23:58.093686 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 17 00:23:58.107570 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 17 00:23:58.110461 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 17 00:23:58.118390 kubelet[2787]: I0517 00:23:58.118367 2787 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 17 00:23:58.118711 kubelet[2787]: I0517 00:23:58.118562 2787 eviction_manager.go:189] "Eviction manager: starting control loop"
May 17 00:23:58.118711 kubelet[2787]: I0517 00:23:58.118575 2787 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 17 00:23:58.120726 kubelet[2787]: E0517 00:23:58.120703 2787 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 17 00:23:58.121211 kubelet[2787]: E0517 00:23:58.120753 2787 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-208\" not found"
May 17 00:23:58.122066 kubelet[2787]: I0517 00:23:58.122049 2787 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 17 00:23:58.161345 systemd[1]: Created slice kubepods-burstable-podfe6cf74b73a0fc8b8f845e162a7e31fa.slice - libcontainer container kubepods-burstable-podfe6cf74b73a0fc8b8f845e162a7e31fa.slice.
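Every reflector, event, and certificate error in this startup shares one root cause: nothing is listening on 172.31.18.208:6443 yet, because the kube-apiserver this kubelet is about to launch from its static pod path has not started. That is the normal control-plane bootstrap chicken-and-egg, and a plain TCP probe reproduces the diagnosis; a minimal sketch with the address taken from the log:

```go
// Probe the apiserver endpoint the errors above are failing against.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "172.31.18.208:6443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not up yet:", err) // "connect: connection refused" during bootstrap
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
```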
May 17 00:23:58.174833 kubelet[2787]: E0517 00:23:58.174568 2787 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-208\" not found" node="ip-172-31-18-208"
May 17 00:23:58.177011 systemd[1]: Created slice kubepods-burstable-pod1fda9552e203700b5ed577cfd038aac5.slice - libcontainer container kubepods-burstable-pod1fda9552e203700b5ed577cfd038aac5.slice.
May 17 00:23:58.185386 kubelet[2787]: E0517 00:23:58.185104 2787 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-208\" not found" node="ip-172-31-18-208"
May 17 00:23:58.188194 systemd[1]: Created slice kubepods-burstable-podaee414251751de0b62564b4870e5ef8f.slice - libcontainer container kubepods-burstable-podaee414251751de0b62564b4870e5ef8f.slice.
May 17 00:23:58.190220 kubelet[2787]: E0517 00:23:58.190192 2787 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-208\" not found" node="ip-172-31-18-208"
May 17 00:23:58.220702 kubelet[2787]: I0517 00:23:58.220665 2787 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-208"
May 17 00:23:58.221013 kubelet[2787]: E0517 00:23:58.220991 2787 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.208:6443/api/v1/nodes\": dial tcp 172.31.18.208:6443: connect: connection refused" node="ip-172-31-18-208"
May 17 00:23:58.232410 kubelet[2787]: I0517 00:23:58.232376 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe6cf74b73a0fc8b8f845e162a7e31fa-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-208\" (UID: \"fe6cf74b73a0fc8b8f845e162a7e31fa\") " pod="kube-system/kube-scheduler-ip-172-31-18-208"
May 17 00:23:58.232695 kubelet[2787]: I0517 00:23:58.232586 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1fda9552e203700b5ed577cfd038aac5-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-208\" (UID: \"1fda9552e203700b5ed577cfd038aac5\") " pod="kube-system/kube-apiserver-ip-172-31-18-208"
May 17 00:23:58.232695 kubelet[2787]: I0517 00:23:58.232610 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1fda9552e203700b5ed577cfd038aac5-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-208\" (UID: \"1fda9552e203700b5ed577cfd038aac5\") " pod="kube-system/kube-apiserver-ip-172-31-18-208"
May 17 00:23:58.232695 kubelet[2787]: E0517 00:23:58.232631 2787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.208:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-208?timeout=10s\": dial tcp 172.31.18.208:6443: connect: connection refused" interval="400ms"
May 17 00:23:58.232695 kubelet[2787]: I0517 00:23:58.232643 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aee414251751de0b62564b4870e5ef8f-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-208\" (UID: \"aee414251751de0b62564b4870e5ef8f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-208"
May 17 00:23:58.232695 kubelet[2787]: I0517 00:23:58.232678 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/aee414251751de0b62564b4870e5ef8f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-208\" (UID: \"aee414251751de0b62564b4870e5ef8f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-208"
May 17 00:23:58.232868 kubelet[2787]: I0517 00:23:58.232699 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1fda9552e203700b5ed577cfd038aac5-ca-certs\") pod \"kube-apiserver-ip-172-31-18-208\" (UID: \"1fda9552e203700b5ed577cfd038aac5\") " pod="kube-system/kube-apiserver-ip-172-31-18-208"
May 17 00:23:58.232868 kubelet[2787]: I0517 00:23:58.232715 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aee414251751de0b62564b4870e5ef8f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-208\" (UID: \"aee414251751de0b62564b4870e5ef8f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-208"
May 17 00:23:58.232868 kubelet[2787]: I0517 00:23:58.232731 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aee414251751de0b62564b4870e5ef8f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-208\" (UID: \"aee414251751de0b62564b4870e5ef8f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-208"
May 17 00:23:58.232868 kubelet[2787]: I0517 00:23:58.232747 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aee414251751de0b62564b4870e5ef8f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-208\" (UID: \"aee414251751de0b62564b4870e5ef8f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-208"
May 17 00:23:58.422974 kubelet[2787]: I0517 00:23:58.422830 2787 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-208"
May 17 00:23:58.423197 kubelet[2787]: E0517 00:23:58.423147 2787 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.208:6443/api/v1/nodes\": dial tcp 172.31.18.208:6443: connect: connection refused" node="ip-172-31-18-208"
May 17 00:23:58.475842 containerd[1976]: time="2025-05-17T00:23:58.475799973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-208,Uid:fe6cf74b73a0fc8b8f845e162a7e31fa,Namespace:kube-system,Attempt:0,}"
May 17 00:23:58.491082 containerd[1976]: time="2025-05-17T00:23:58.491037545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-208,Uid:1fda9552e203700b5ed577cfd038aac5,Namespace:kube-system,Attempt:0,}"
May 17 00:23:58.491375 containerd[1976]: time="2025-05-17T00:23:58.491341282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-208,Uid:aee414251751de0b62564b4870e5ef8f,Namespace:kube-system,Attempt:0,}"
May 17 00:23:58.633549 kubelet[2787]: E0517 00:23:58.633491 2787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.208:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-208?timeout=10s\": dial tcp 172.31.18.208:6443: connect: connection refused" interval="800ms"
May 17 00:23:58.825712 kubelet[2787]: I0517 00:23:58.825675 2787 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-208"
May 17 00:23:58.826030 kubelet[2787]: E0517 00:23:58.825988 2787 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.208:6443/api/v1/nodes\": dial tcp 172.31.18.208:6443: connect: connection refused" node="ip-172-31-18-208"
May 17 00:23:59.006510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2217185645.mount: Deactivated successfully.
May 17 00:23:59.015867 containerd[1976]: time="2025-05-17T00:23:59.015818878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 17 00:23:59.023197 containerd[1976]: time="2025-05-17T00:23:59.023097864Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
May 17 00:23:59.024745 containerd[1976]: time="2025-05-17T00:23:59.024695512Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 17 00:23:59.026284 containerd[1976]: time="2025-05-17T00:23:59.026231258Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 17 00:23:59.028035 containerd[1976]: time="2025-05-17T00:23:59.027956132Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 17 00:23:59.029518 containerd[1976]: time="2025-05-17T00:23:59.029474438Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 17 00:23:59.031408 containerd[1976]: time="2025-05-17T00:23:59.031267352Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 17 00:23:59.033974 containerd[1976]: time="2025-05-17T00:23:59.033944364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 17 00:23:59.034821 containerd[1976]: time="2025-05-17T00:23:59.034786793Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 558.911046ms"
May 17 00:23:59.037789 containerd[1976]: time="2025-05-17T00:23:59.037744722Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 546.62737ms"
May 17 00:23:59.040638 containerd[1976]: time="2025-05-17T00:23:59.040605261Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 549.203721ms"
May 17 00:23:59.044386 kubelet[2787]: W0517 00:23:59.044327 2787 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.208:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.208:6443: connect: connection refused
May 17 00:23:59.044494 kubelet[2787]: E0517 00:23:59.044397 2787 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.18.208:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.208:6443: connect: connection refused" logger="UnhandledError"
May 17 00:23:59.144558 kubelet[2787]: W0517 00:23:59.144412 2787 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.208:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-208&limit=500&resourceVersion=0": dial tcp 172.31.18.208:6443: connect: connection refused
May 17 00:23:59.144558 kubelet[2787]: E0517 00:23:59.144479 2787 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.18.208:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-208&limit=500&resourceVersion=0\": dial tcp 172.31.18.208:6443: connect: connection refused" logger="UnhandledError"
May 17 00:23:59.294556 containerd[1976]: time="2025-05-17T00:23:59.294404103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:23:59.294556 containerd[1976]: time="2025-05-17T00:23:59.294454573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:23:59.294556 containerd[1976]: time="2025-05-17T00:23:59.294468900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:23:59.294905 containerd[1976]: time="2025-05-17T00:23:59.294643492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:23:59.296425 containerd[1976]: time="2025-05-17T00:23:59.295705607Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:23:59.296425 containerd[1976]: time="2025-05-17T00:23:59.295750087Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:23:59.296425 containerd[1976]: time="2025-05-17T00:23:59.295764597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:23:59.296425 containerd[1976]: time="2025-05-17T00:23:59.295827319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:23:59.300551 containerd[1976]: time="2025-05-17T00:23:59.298846582Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:23:59.300551 containerd[1976]: time="2025-05-17T00:23:59.298883238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:23:59.300551 containerd[1976]: time="2025-05-17T00:23:59.298906562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:23:59.300551 containerd[1976]: time="2025-05-17T00:23:59.298970833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:23:59.323732 systemd[1]: Started cri-containerd-d1f9d683bcced7b53fecf9deea02b7d1beccf4fc3c1cb22d3039be6db3bb6ef4.scope - libcontainer container d1f9d683bcced7b53fecf9deea02b7d1beccf4fc3c1cb22d3039be6db3bb6ef4.
May 17 00:23:59.330874 systemd[1]: Started cri-containerd-92ba8af35bb4bcf3c2a64cea2e4aae6ca6dfeead85ad3aadc5b44bfef4da0f0b.scope - libcontainer container 92ba8af35bb4bcf3c2a64cea2e4aae6ca6dfeead85ad3aadc5b44bfef4da0f0b.
May 17 00:23:59.333161 systemd[1]: Started cri-containerd-e6be6bdf70990fdf94b39d10e0d5023d4cb249e1c8ddc1cd4d33987a2f0426d3.scope - libcontainer container e6be6bdf70990fdf94b39d10e0d5023d4cb249e1c8ddc1cd4d33987a2f0426d3.
May 17 00:23:59.388722 containerd[1976]: time="2025-05-17T00:23:59.388677105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-208,Uid:fe6cf74b73a0fc8b8f845e162a7e31fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1f9d683bcced7b53fecf9deea02b7d1beccf4fc3c1cb22d3039be6db3bb6ef4\""
May 17 00:23:59.395732 containerd[1976]: time="2025-05-17T00:23:59.395632946Z" level=info msg="CreateContainer within sandbox \"d1f9d683bcced7b53fecf9deea02b7d1beccf4fc3c1cb22d3039be6db3bb6ef4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 17 00:23:59.406525 containerd[1976]: time="2025-05-17T00:23:59.406492479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-208,Uid:aee414251751de0b62564b4870e5ef8f,Namespace:kube-system,Attempt:0,} returns sandbox id \"92ba8af35bb4bcf3c2a64cea2e4aae6ca6dfeead85ad3aadc5b44bfef4da0f0b\""
May 17 00:23:59.409756 containerd[1976]: time="2025-05-17T00:23:59.409728838Z" level=info msg="CreateContainer within sandbox \"92ba8af35bb4bcf3c2a64cea2e4aae6ca6dfeead85ad3aadc5b44bfef4da0f0b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 17 00:23:59.413119 containerd[1976]: time="2025-05-17T00:23:59.413088923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-208,Uid:1fda9552e203700b5ed577cfd038aac5,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6be6bdf70990fdf94b39d10e0d5023d4cb249e1c8ddc1cd4d33987a2f0426d3\""
May 17 00:23:59.416014 containerd[1976]: time="2025-05-17T00:23:59.415981316Z" level=info msg="CreateContainer within sandbox \"e6be6bdf70990fdf94b39d10e0d5023d4cb249e1c8ddc1cd4d33987a2f0426d3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 17 00:23:59.433986 kubelet[2787]: E0517 00:23:59.433948 2787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.208:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-208?timeout=10s\": dial tcp 172.31.18.208:6443: connect: connection refused" interval="1.6s"
May 17 00:23:59.445922 containerd[1976]: time="2025-05-17T00:23:59.445878655Z" level=info msg="CreateContainer within sandbox \"d1f9d683bcced7b53fecf9deea02b7d1beccf4fc3c1cb22d3039be6db3bb6ef4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6b34d489b8b90c7b8b8dce80b063c5013d2c6e5d9272698b8b712782ade38b48\""
May 17 00:23:59.446507 containerd[1976]: time="2025-05-17T00:23:59.446480683Z" level=info msg="StartContainer for \"6b34d489b8b90c7b8b8dce80b063c5013d2c6e5d9272698b8b712782ade38b48\""
May 17 00:23:59.456465 containerd[1976]: time="2025-05-17T00:23:59.456417656Z" level=info msg="CreateContainer within sandbox \"e6be6bdf70990fdf94b39d10e0d5023d4cb249e1c8ddc1cd4d33987a2f0426d3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c35922ea9007ac496215bdc0695279d0acdfce0a92cfa4fdfa44499554c9ef76\""
May 17 00:23:59.457136 containerd[1976]: time="2025-05-17T00:23:59.457108699Z" level=info msg="CreateContainer within sandbox \"92ba8af35bb4bcf3c2a64cea2e4aae6ca6dfeead85ad3aadc5b44bfef4da0f0b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a1e35264acc7685a0f2728d122927ebc61edeac63efd2dabe9bebc3d91a006d8\""
May 17 00:23:59.457841 containerd[1976]: time="2025-05-17T00:23:59.457634337Z" level=info msg="StartContainer for \"c35922ea9007ac496215bdc0695279d0acdfce0a92cfa4fdfa44499554c9ef76\""
May 17 00:23:59.457841 containerd[1976]: time="2025-05-17T00:23:59.457771936Z" level=info msg="StartContainer for \"a1e35264acc7685a0f2728d122927ebc61edeac63efd2dabe9bebc3d91a006d8\""
May 17 00:23:59.471859 kubelet[2787]: W0517 00:23:59.471609 2787 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.208:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.208:6443: connect: connection refused
May 17 00:23:59.471859 kubelet[2787]: E0517 00:23:59.471651 2787 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.18.208:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.208:6443: connect: connection refused" logger="UnhandledError"
May 17 00:23:59.476587 systemd[1]: Started cri-containerd-6b34d489b8b90c7b8b8dce80b063c5013d2c6e5d9272698b8b712782ade38b48.scope - libcontainer container 6b34d489b8b90c7b8b8dce80b063c5013d2c6e5d9272698b8b712782ade38b48.
May 17 00:23:59.496685 systemd[1]: Started cri-containerd-c35922ea9007ac496215bdc0695279d0acdfce0a92cfa4fdfa44499554c9ef76.scope - libcontainer container c35922ea9007ac496215bdc0695279d0acdfce0a92cfa4fdfa44499554c9ef76.
May 17 00:23:59.512709 systemd[1]: Started cri-containerd-a1e35264acc7685a0f2728d122927ebc61edeac63efd2dabe9bebc3d91a006d8.scope - libcontainer container a1e35264acc7685a0f2728d122927ebc61edeac63efd2dabe9bebc3d91a006d8.
May 17 00:23:59.547603 kubelet[2787]: W0517 00:23:59.547462 2787 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.208:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.18.208:6443: connect: connection refused
May 17 00:23:59.547603 kubelet[2787]: E0517 00:23:59.547520 2787 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.18.208:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.208:6443: connect: connection refused" logger="UnhandledError"
May 17 00:23:59.552710 containerd[1976]: time="2025-05-17T00:23:59.550687783Z" level=info msg="StartContainer for \"6b34d489b8b90c7b8b8dce80b063c5013d2c6e5d9272698b8b712782ade38b48\" returns successfully"
May 17 00:23:59.576173 containerd[1976]: time="2025-05-17T00:23:59.576115797Z" level=info msg="StartContainer for \"c35922ea9007ac496215bdc0695279d0acdfce0a92cfa4fdfa44499554c9ef76\" returns successfully"
May 17 00:23:59.604809 containerd[1976]: time="2025-05-17T00:23:59.604771843Z" level=info msg="StartContainer for \"a1e35264acc7685a0f2728d122927ebc61edeac63efd2dabe9bebc3d91a006d8\" returns successfully"
May 17 00:23:59.628591 kubelet[2787]: I0517 00:23:59.628145 2787 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-208"
May 17 00:23:59.628591 kubelet[2787]: E0517 00:23:59.628516 2787 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.208:6443/api/v1/nodes\": dial tcp 172.31.18.208:6443: connect: connection refused" node="ip-172-31-18-208"
May 17 00:24:00.069953 kubelet[2787]: E0517 00:24:00.069905 2787 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.18.208:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.18.208:6443: connect: connection refused" logger="UnhandledError"
May 17 00:24:00.088294 kubelet[2787]: E0517 00:24:00.087228 2787 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-208\" not found" node="ip-172-31-18-208"
May 17 00:24:00.091374 kubelet[2787]: E0517 00:24:00.091084 2787 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-208\" not found" node="ip-172-31-18-208"
May 17 00:24:00.093881 kubelet[2787]: E0517 00:24:00.093628 2787 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-208\" not found" node="ip-172-31-18-208"
May 17 00:24:00.371187 kubelet[2787]: E0517 00:24:00.370987 2787 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.208:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.208:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-208.184028bac278a8ba default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-208,UID:ip-172-31-18-208,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-208,},FirstTimestamp:2025-05-17 00:23:58.011467962 +0000 UTC m=+0.458212883,LastTimestamp:2025-05-17 00:23:58.011467962 +0000 UTC m=+0.458212883,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-208,}"
May 17 00:24:01.035011 kubelet[2787]: E0517 00:24:01.034958 2787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.208:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-208?timeout=10s\": dial tcp 172.31.18.208:6443: connect: connection refused" interval="3.2s"
May 17 00:24:01.099052 kubelet[2787]: E0517 00:24:01.099015 2787 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-208\" not found" node="ip-172-31-18-208"
May 17 00:24:01.100476 kubelet[2787]: E0517 00:24:01.100444 2787 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-208\" not found" node="ip-172-31-18-208"
May 17 00:24:01.230783 kubelet[2787]: I0517 00:24:01.230753 2787 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-208"
May 17 00:24:02.429159 kubelet[2787]: E0517 00:24:02.429121 2787 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-208\" not found" node="ip-172-31-18-208"
May 17 00:24:02.864023 kubelet[2787]: E0517 00:24:02.863848 2787 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-208\" not found" node="ip-172-31-18-208"
May 17 00:24:02.943583 kubelet[2787]: I0517 00:24:02.943524 2787 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-18-208"
May 17 00:24:03.009191 kubelet[2787]: I0517 00:24:03.009142 2787 apiserver.go:52] "Watching apiserver"
May 17 00:24:03.031636 kubelet[2787]: I0517 00:24:03.031568 2787 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
May 17 00:24:03.032677 kubelet[2787]: I0517 00:24:03.032644 2787 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-208"
May 17 00:24:03.037696 kubelet[2787]: E0517 00:24:03.037665 2787 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-18-208\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-18-208"
May 17 00:24:03.037696 kubelet[2787]: I0517 00:24:03.037692 2787 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-208"
May 17 00:24:03.039332 kubelet[2787]: E0517 00:24:03.039304 2787 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-18-208\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-18-208"
May 17 00:24:03.039332 kubelet[2787]: I0517 00:24:03.039328 2787 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-208"
May 17 00:24:03.041162 kubelet[2787]: E0517 00:24:03.041107 2787 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-18-208\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-18-208"
May 17 00:24:03.532394 kubelet[2787]: I0517 00:24:03.532353 2787 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-208"
May 17 00:24:03.534558 kubelet[2787]: E0517 00:24:03.534510 2787 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-18-208\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-18-208"
May 17 00:24:03.812841 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
May 17 00:24:05.009964 systemd[1]: Reloading requested from client PID 3061 ('systemctl') (unit session-7.scope)...
May 17 00:24:05.009981 systemd[1]: Reloading...
May 17 00:24:05.097561 zram_generator::config[3104]: No configuration found.
May 17 00:24:05.216326 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:24:05.313954 systemd[1]: Reloading finished in 303 ms.
May 17 00:24:05.356218 kubelet[2787]: I0517 00:24:05.355900 2787 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 17 00:24:05.355960 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:24:05.371255 systemd[1]: kubelet.service: Deactivated successfully.
May 17 00:24:05.371450 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:24:05.377921 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:24:05.639409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:24:05.649914 (kubelet)[3161]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 17 00:24:05.713385 kubelet[3161]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 17 00:24:05.713385 kubelet[3161]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 17 00:24:05.713385 kubelet[3161]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 17 00:24:05.713776 kubelet[3161]: I0517 00:24:05.713445 3161 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 17 00:24:05.720391 kubelet[3161]: I0517 00:24:05.720346 3161 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
May 17 00:24:05.720391 kubelet[3161]: I0517 00:24:05.720373 3161 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 17 00:24:05.720630 kubelet[3161]: I0517 00:24:05.720614 3161 server.go:954] "Client rotation is on, will bootstrap in background"
May 17 00:24:05.723565 kubelet[3161]: I0517 00:24:05.723507 3161 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
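The mirror-pod failures above are also transient: system-node-critical is one of the PriorityClasses the API server bootstraps itself, so the error clears once the control plane finishes coming up. A diagnostic sketch with k8s.io/client-go that looks up the missing object; the kubeconfig path is an assumption, not taken from this log:

```go
// Check for the PriorityClass the mirror-pod creation is waiting on.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Fails with "not found" until the apiserver's bootstrap has created
	// the system PriorityClasses.
	pc, err := cs.SchedulingV1().PriorityClasses().Get(context.Background(), "system-node-critical", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("system-node-critical value:", pc.Value)
}
```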
May 17 00:24:05.726996 kubelet[3161]: I0517 00:24:05.726604 3161 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 17 00:24:05.731690 kubelet[3161]: E0517 00:24:05.731655 3161 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 17 00:24:05.731690 kubelet[3161]: I0517 00:24:05.731685 3161 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 17 00:24:05.733807 kubelet[3161]: I0517 00:24:05.733790 3161 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 17 00:24:05.734923 kubelet[3161]: I0517 00:24:05.734885 3161 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 17 00:24:05.735083 kubelet[3161]: I0517 00:24:05.734923 3161 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-208","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 17 00:24:05.735174 kubelet[3161]: I0517 00:24:05.735090 3161 topology_manager.go:138] "Creating topology manager with none policy"
May 17 00:24:05.735174 kubelet[3161]: I0517 00:24:05.735101 3161 container_manager_linux.go:304] "Creating device plugin manager"
May 17 00:24:05.738612 kubelet[3161]: I0517 00:24:05.738515 3161 state_mem.go:36] "Initialized new in-memory state store"
May 17 00:24:05.738911 kubelet[3161]: I0517 00:24:05.738888 3161 kubelet.go:446] "Attempting to sync node with API server"
May 17 00:24:05.739392 kubelet[3161]: I0517 00:24:05.739274 3161 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 17 00:24:05.739392 kubelet[3161]: I0517 00:24:05.739318 3161 kubelet.go:352] "Adding apiserver pod source"
May 17 00:24:05.739392 kubelet[3161]: I0517 00:24:05.739333 3161 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 17 00:24:05.741351 kubelet[3161]: I0517 00:24:05.741248 3161 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
May 17 00:24:05.745167 kubelet[3161]: I0517 00:24:05.744780 3161 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 17 00:24:05.749657 kubelet[3161]: I0517 00:24:05.749637 3161 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 17 00:24:05.749778 kubelet[3161]: I0517 00:24:05.749771 3161 server.go:1287] "Started kubelet"
May 17 00:24:05.756885 kubelet[3161]: I0517 00:24:05.756859 3161 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 17 00:24:05.769480 kubelet[3161]: I0517 00:24:05.768651 3161 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 17 00:24:05.769480 kubelet[3161]: I0517 00:24:05.769454 3161 server.go:479] "Adding debug handlers to kubelet server"
May 17 00:24:05.770348 kubelet[3161]: I0517 00:24:05.770297 3161 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 17 00:24:05.771231 kubelet[3161]: I0517 00:24:05.770475 3161 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 17 00:24:05.771231 kubelet[3161]: I0517 00:24:05.770658 3161 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 17 00:24:05.771903 kubelet[3161]: I0517 00:24:05.771887 3161 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 17 00:24:05.772803 kubelet[3161]: I0517 00:24:05.772783 3161 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 17 00:24:05.772898 kubelet[3161]: I0517 00:24:05.772886 3161 reconciler.go:26] "Reconciler: start to sync state"
May 17 00:24:05.774185 kubelet[3161]: I0517 00:24:05.774162 3161 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 17 00:24:05.775213 kubelet[3161]: I0517 00:24:05.775197 3161 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 17 00:24:05.775306 kubelet[3161]: I0517 00:24:05.775299 3161 status_manager.go:227] "Starting to sync pod status with apiserver"
May 17 00:24:05.775360 kubelet[3161]: I0517 00:24:05.775353 3161 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 17 00:24:05.775403 kubelet[3161]: I0517 00:24:05.775398 3161 kubelet.go:2382] "Starting kubelet main sync loop"
May 17 00:24:05.775488 kubelet[3161]: E0517 00:24:05.775476 3161 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 17 00:24:05.775906 kubelet[3161]: I0517 00:24:05.775889 3161 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 17 00:24:05.777839 kubelet[3161]: E0517 00:24:05.777823 3161 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 17 00:24:05.786987 kubelet[3161]: I0517 00:24:05.786950 3161 factory.go:221] Registration of the containerd container factory successfully
May 17 00:24:05.786987 kubelet[3161]: I0517 00:24:05.786969 3161 factory.go:221] Registration of the systemd container factory successfully
May 17 00:24:05.828506 kubelet[3161]: I0517 00:24:05.828451 3161 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 17 00:24:05.829508 kubelet[3161]: I0517 00:24:05.828658 3161 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 17 00:24:05.829508 kubelet[3161]: I0517 00:24:05.828678 3161 state_mem.go:36] "Initialized new in-memory state store"
May 17 00:24:05.829508 kubelet[3161]: I0517 00:24:05.828819 3161 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 17 00:24:05.829508 kubelet[3161]: I0517 00:24:05.828828 3161 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 17 00:24:05.829508 kubelet[3161]: I0517 00:24:05.828844 3161 policy_none.go:49] "None policy: Start"
May 17 00:24:05.829508 kubelet[3161]: I0517 00:24:05.828853 3161 memory_manager.go:186] "Starting memorymanager" policy="None"
May 17 00:24:05.829508 kubelet[3161]: I0517 00:24:05.828861 3161 state_mem.go:35] "Initializing new in-memory state store"
May 17 00:24:05.829508 kubelet[3161]: I0517 00:24:05.828951 3161 state_mem.go:75] "Updated machine memory state"
May 17 00:24:05.832808 kubelet[3161]: I0517 00:24:05.832792 3161 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 17 00:24:05.833002 kubelet[3161]: I0517 00:24:05.832992 3161 eviction_manager.go:189] "Eviction manager: starting control loop"
May 17 00:24:05.833082 kubelet[3161]: I0517 00:24:05.833058 3161 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 17 00:24:05.833648 kubelet[3161]: I0517 00:24:05.833634 3161 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 17 00:24:05.836012 kubelet[3161]: E0517 00:24:05.835990 3161 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring."
err="no imagefs label for configured runtime" May 17 00:24:05.876401 kubelet[3161]: I0517 00:24:05.876371 3161 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-208" May 17 00:24:05.878227 kubelet[3161]: I0517 00:24:05.878004 3161 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-208" May 17 00:24:05.878227 kubelet[3161]: I0517 00:24:05.878048 3161 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-208" May 17 00:24:05.940371 kubelet[3161]: I0517 00:24:05.939036 3161 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-208" May 17 00:24:05.945037 kubelet[3161]: I0517 00:24:05.945013 3161 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-18-208" May 17 00:24:05.945170 kubelet[3161]: I0517 00:24:05.945082 3161 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-18-208" May 17 00:24:06.077544 kubelet[3161]: I0517 00:24:06.077282 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aee414251751de0b62564b4870e5ef8f-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-208\" (UID: \"aee414251751de0b62564b4870e5ef8f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-208" May 17 00:24:06.077544 kubelet[3161]: I0517 00:24:06.077330 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/aee414251751de0b62564b4870e5ef8f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-208\" (UID: \"aee414251751de0b62564b4870e5ef8f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-208" May 17 00:24:06.077544 kubelet[3161]: I0517 00:24:06.077357 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe6cf74b73a0fc8b8f845e162a7e31fa-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-208\" (UID: \"fe6cf74b73a0fc8b8f845e162a7e31fa\") " pod="kube-system/kube-scheduler-ip-172-31-18-208" May 17 00:24:06.077544 kubelet[3161]: I0517 00:24:06.077373 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1fda9552e203700b5ed577cfd038aac5-ca-certs\") pod \"kube-apiserver-ip-172-31-18-208\" (UID: \"1fda9552e203700b5ed577cfd038aac5\") " pod="kube-system/kube-apiserver-ip-172-31-18-208" May 17 00:24:06.077544 kubelet[3161]: I0517 00:24:06.077388 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aee414251751de0b62564b4870e5ef8f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-208\" (UID: \"aee414251751de0b62564b4870e5ef8f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-208" May 17 00:24:06.077776 kubelet[3161]: I0517 00:24:06.077402 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aee414251751de0b62564b4870e5ef8f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-208\" (UID: \"aee414251751de0b62564b4870e5ef8f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-208" May 17 00:24:06.077776 kubelet[3161]: I0517 00:24:06.077417 3161 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aee414251751de0b62564b4870e5ef8f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-208\" (UID: \"aee414251751de0b62564b4870e5ef8f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-208" May 17 00:24:06.077776 kubelet[3161]: I0517 00:24:06.077432 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1fda9552e203700b5ed577cfd038aac5-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-208\" (UID: \"1fda9552e203700b5ed577cfd038aac5\") " pod="kube-system/kube-apiserver-ip-172-31-18-208" May 17 00:24:06.077776 kubelet[3161]: I0517 00:24:06.077451 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1fda9552e203700b5ed577cfd038aac5-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-208\" (UID: \"1fda9552e203700b5ed577cfd038aac5\") " pod="kube-system/kube-apiserver-ip-172-31-18-208" May 17 00:24:06.741614 kubelet[3161]: I0517 00:24:06.741568 3161 apiserver.go:52] "Watching apiserver" May 17 00:24:06.773104 kubelet[3161]: I0517 00:24:06.773062 3161 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 00:24:06.810996 kubelet[3161]: I0517 00:24:06.810970 3161 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-208" May 17 00:24:06.815961 kubelet[3161]: E0517 00:24:06.815749 3161 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-18-208\" already exists" pod="kube-system/kube-apiserver-ip-172-31-18-208" May 17 00:24:06.837401 kubelet[3161]: I0517 00:24:06.837289 3161 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-18-208" podStartSLOduration=1.8372733 podStartE2EDuration="1.8372733s" podCreationTimestamp="2025-05-17 00:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:24:06.828639645 +0000 UTC m=+1.171100315" watchObservedRunningTime="2025-05-17 00:24:06.8372733 +0000 UTC m=+1.179733934" May 17 00:24:06.837681 kubelet[3161]: I0517 00:24:06.837654 3161 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-208" podStartSLOduration=1.837645428 podStartE2EDuration="1.837645428s" podCreationTimestamp="2025-05-17 00:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:24:06.837272114 +0000 UTC m=+1.179732767" watchObservedRunningTime="2025-05-17 00:24:06.837645428 +0000 UTC m=+1.180106080" May 17 00:24:10.887980 kubelet[3161]: I0517 00:24:10.887948 3161 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 00:24:10.888437 containerd[1976]: time="2025-05-17T00:24:10.888364600Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 17 00:24:10.888679 kubelet[3161]: I0517 00:24:10.888551 3161 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:24:11.556799 kubelet[3161]: I0517 00:24:11.556726 3161 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-18-208" podStartSLOduration=6.556458636 podStartE2EDuration="6.556458636s" podCreationTimestamp="2025-05-17 00:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:24:06.849161363 +0000 UTC m=+1.191622016" watchObservedRunningTime="2025-05-17 00:24:11.556458636 +0000 UTC m=+5.898919325" May 17 00:24:11.564884 systemd[1]: Created slice kubepods-besteffort-podee1d8685_183b_4b35_b9de_f1e73c57a077.slice - libcontainer container kubepods-besteffort-podee1d8685_183b_4b35_b9de_f1e73c57a077.slice. May 17 00:24:11.608967 kubelet[3161]: I0517 00:24:11.608669 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ee1d8685-183b-4b35-b9de-f1e73c57a077-kube-proxy\") pod \"kube-proxy-lzn9s\" (UID: \"ee1d8685-183b-4b35-b9de-f1e73c57a077\") " pod="kube-system/kube-proxy-lzn9s" May 17 00:24:11.608967 kubelet[3161]: I0517 00:24:11.608715 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee1d8685-183b-4b35-b9de-f1e73c57a077-xtables-lock\") pod \"kube-proxy-lzn9s\" (UID: \"ee1d8685-183b-4b35-b9de-f1e73c57a077\") " pod="kube-system/kube-proxy-lzn9s" May 17 00:24:11.608967 kubelet[3161]: I0517 00:24:11.608740 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee1d8685-183b-4b35-b9de-f1e73c57a077-lib-modules\") pod \"kube-proxy-lzn9s\" (UID: \"ee1d8685-183b-4b35-b9de-f1e73c57a077\") " pod="kube-system/kube-proxy-lzn9s" May 17 00:24:11.608967 kubelet[3161]: I0517 00:24:11.608780 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv4hm\" (UniqueName: \"kubernetes.io/projected/ee1d8685-183b-4b35-b9de-f1e73c57a077-kube-api-access-tv4hm\") pod \"kube-proxy-lzn9s\" (UID: \"ee1d8685-183b-4b35-b9de-f1e73c57a077\") " pod="kube-system/kube-proxy-lzn9s" May 17 00:24:11.875756 containerd[1976]: time="2025-05-17T00:24:11.875622144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lzn9s,Uid:ee1d8685-183b-4b35-b9de-f1e73c57a077,Namespace:kube-system,Attempt:0,}" May 17 00:24:11.927557 containerd[1976]: time="2025-05-17T00:24:11.926677048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:24:11.927557 containerd[1976]: time="2025-05-17T00:24:11.926741896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:24:11.927557 containerd[1976]: time="2025-05-17T00:24:11.926756784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:11.927557 containerd[1976]: time="2025-05-17T00:24:11.926872832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:11.956766 systemd[1]: Started cri-containerd-3f36ff4eb0f5a1a610539aa934c3b03cfd89c63d036fef3c09a7cb7add80ea70.scope - libcontainer container 3f36ff4eb0f5a1a610539aa934c3b03cfd89c63d036fef3c09a7cb7add80ea70. May 17 00:24:12.011413 systemd[1]: Created slice kubepods-besteffort-podac909d2b_6981_4b37_a85b_a5a2163972f1.slice - libcontainer container kubepods-besteffort-podac909d2b_6981_4b37_a85b_a5a2163972f1.slice. May 17 00:24:12.012110 kubelet[3161]: I0517 00:24:12.011967 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx4cd\" (UniqueName: \"kubernetes.io/projected/ac909d2b-6981-4b37-a85b-a5a2163972f1-kube-api-access-rx4cd\") pod \"tigera-operator-844669ff44-bssfg\" (UID: \"ac909d2b-6981-4b37-a85b-a5a2163972f1\") " pod="tigera-operator/tigera-operator-844669ff44-bssfg" May 17 00:24:12.012110 kubelet[3161]: I0517 00:24:12.012019 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ac909d2b-6981-4b37-a85b-a5a2163972f1-var-lib-calico\") pod \"tigera-operator-844669ff44-bssfg\" (UID: \"ac909d2b-6981-4b37-a85b-a5a2163972f1\") " pod="tigera-operator/tigera-operator-844669ff44-bssfg" May 17 00:24:12.017566 containerd[1976]: time="2025-05-17T00:24:12.017405543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lzn9s,Uid:ee1d8685-183b-4b35-b9de-f1e73c57a077,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f36ff4eb0f5a1a610539aa934c3b03cfd89c63d036fef3c09a7cb7add80ea70\"" May 17 00:24:12.021184 containerd[1976]: time="2025-05-17T00:24:12.021142029Z" level=info msg="CreateContainer within sandbox \"3f36ff4eb0f5a1a610539aa934c3b03cfd89c63d036fef3c09a7cb7add80ea70\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 00:24:12.046802 containerd[1976]: time="2025-05-17T00:24:12.046739811Z" level=info msg="CreateContainer within sandbox \"3f36ff4eb0f5a1a610539aa934c3b03cfd89c63d036fef3c09a7cb7add80ea70\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9c25a9606a9e6e70f4dac293198e043cf80d54cc8c4105f187e130302306eeb7\"" May 17 00:24:12.047665 containerd[1976]: time="2025-05-17T00:24:12.047268142Z" level=info msg="StartContainer for \"9c25a9606a9e6e70f4dac293198e043cf80d54cc8c4105f187e130302306eeb7\"" May 17 00:24:12.072756 systemd[1]: Started cri-containerd-9c25a9606a9e6e70f4dac293198e043cf80d54cc8c4105f187e130302306eeb7.scope - libcontainer container 9c25a9606a9e6e70f4dac293198e043cf80d54cc8c4105f187e130302306eeb7. May 17 00:24:12.101430 containerd[1976]: time="2025-05-17T00:24:12.101379795Z" level=info msg="StartContainer for \"9c25a9606a9e6e70f4dac293198e043cf80d54cc8c4105f187e130302306eeb7\" returns successfully" May 17 00:24:12.316752 containerd[1976]: time="2025-05-17T00:24:12.316711135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-bssfg,Uid:ac909d2b-6981-4b37-a85b-a5a2163972f1,Namespace:tigera-operator,Attempt:0,}" May 17 00:24:12.341503 containerd[1976]: time="2025-05-17T00:24:12.341423482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:24:12.341635 containerd[1976]: time="2025-05-17T00:24:12.341513093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:24:12.342119 containerd[1976]: time="2025-05-17T00:24:12.341615500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:12.342119 containerd[1976]: time="2025-05-17T00:24:12.341975416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:12.360717 systemd[1]: Started cri-containerd-803e8296431489d593801dfbe3f73a20276d3e43e0fb1ab06f4fece81d25dcd8.scope - libcontainer container 803e8296431489d593801dfbe3f73a20276d3e43e0fb1ab06f4fece81d25dcd8. May 17 00:24:12.402025 containerd[1976]: time="2025-05-17T00:24:12.401969020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-bssfg,Uid:ac909d2b-6981-4b37-a85b-a5a2163972f1,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"803e8296431489d593801dfbe3f73a20276d3e43e0fb1ab06f4fece81d25dcd8\"" May 17 00:24:12.403719 containerd[1976]: time="2025-05-17T00:24:12.403657929Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 17 00:24:12.724078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount709246828.mount: Deactivated successfully. May 17 00:24:13.950378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount404694528.mount: Deactivated successfully. May 17 00:24:14.573364 kubelet[3161]: I0517 00:24:14.573293 3161 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lzn9s" podStartSLOduration=3.569751608 podStartE2EDuration="3.569751608s" podCreationTimestamp="2025-05-17 00:24:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:24:12.834279973 +0000 UTC m=+7.176740626" watchObservedRunningTime="2025-05-17 00:24:14.569751608 +0000 UTC m=+8.912212259" May 17 00:24:14.951776 containerd[1976]: time="2025-05-17T00:24:14.951500896Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:14.953508 containerd[1976]: time="2025-05-17T00:24:14.953440312Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451" May 17 00:24:14.955940 containerd[1976]: time="2025-05-17T00:24:14.955862486Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:14.959149 containerd[1976]: time="2025-05-17T00:24:14.959119268Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:14.960199 containerd[1976]: time="2025-05-17T00:24:14.959682414Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 2.555960761s" May 17 00:24:14.960199 containerd[1976]: time="2025-05-17T00:24:14.959713999Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference 
\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\"" May 17 00:24:14.975125 containerd[1976]: time="2025-05-17T00:24:14.975091233Z" level=info msg="CreateContainer within sandbox \"803e8296431489d593801dfbe3f73a20276d3e43e0fb1ab06f4fece81d25dcd8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 17 00:24:14.997268 containerd[1976]: time="2025-05-17T00:24:14.997203955Z" level=info msg="CreateContainer within sandbox \"803e8296431489d593801dfbe3f73a20276d3e43e0fb1ab06f4fece81d25dcd8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"29271d25608cb5339c8c8c1cfa4cd48cb8ae95cef2e0a6bae94e65729aea26da\"" May 17 00:24:14.997852 containerd[1976]: time="2025-05-17T00:24:14.997813122Z" level=info msg="StartContainer for \"29271d25608cb5339c8c8c1cfa4cd48cb8ae95cef2e0a6bae94e65729aea26da\"" May 17 00:24:15.028741 systemd[1]: Started cri-containerd-29271d25608cb5339c8c8c1cfa4cd48cb8ae95cef2e0a6bae94e65729aea26da.scope - libcontainer container 29271d25608cb5339c8c8c1cfa4cd48cb8ae95cef2e0a6bae94e65729aea26da. May 17 00:24:15.056678 containerd[1976]: time="2025-05-17T00:24:15.056615180Z" level=info msg="StartContainer for \"29271d25608cb5339c8c8c1cfa4cd48cb8ae95cef2e0a6bae94e65729aea26da\" returns successfully" May 17 00:24:15.875922 kubelet[3161]: I0517 00:24:15.874961 3161 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-844669ff44-bssfg" podStartSLOduration=2.317583343 podStartE2EDuration="4.874940089s" podCreationTimestamp="2025-05-17 00:24:11 +0000 UTC" firstStartedPulling="2025-05-17 00:24:12.403061593 +0000 UTC m=+6.745522237" lastFinishedPulling="2025-05-17 00:24:14.96041835 +0000 UTC m=+9.302878983" observedRunningTime="2025-05-17 00:24:15.854740594 +0000 UTC m=+10.197201247" watchObservedRunningTime="2025-05-17 00:24:15.874940089 +0000 UTC m=+10.217400742" May 17 00:24:18.658660 update_engine[1962]: I20250517 00:24:18.658581 1962 update_attempter.cc:509] Updating boot flags... May 17 00:24:18.797338 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3539) May 17 00:24:19.610498 sudo[2302]: pam_unix(sudo:session): session closed for user root May 17 00:24:19.637377 sshd[2299]: pam_unix(sshd:session): session closed for user core May 17 00:24:19.642408 systemd[1]: sshd@6-172.31.18.208:22-147.75.109.163:35508.service: Deactivated successfully. May 17 00:24:19.642682 systemd-logind[1960]: Session 7 logged out. Waiting for processes to exit. May 17 00:24:19.645164 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:24:19.645557 systemd[1]: session-7.scope: Consumed 4.617s CPU time, 142.5M memory peak, 0B memory swap peak. May 17 00:24:19.649050 systemd-logind[1960]: Removed session 7. May 17 00:24:24.153545 systemd[1]: Created slice kubepods-besteffort-pod96f74f82_b6f8_4f3d_84f8_04278ccd8069.slice - libcontainer container kubepods-besteffort-pod96f74f82_b6f8_4f3d_84f8_04278ccd8069.slice. 
May 17 00:24:24.197499 kubelet[3161]: I0517 00:24:24.197453 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hj74\" (UniqueName: \"kubernetes.io/projected/96f74f82-b6f8-4f3d-84f8-04278ccd8069-kube-api-access-7hj74\") pod \"calico-typha-7b5c94df46-dc54r\" (UID: \"96f74f82-b6f8-4f3d-84f8-04278ccd8069\") " pod="calico-system/calico-typha-7b5c94df46-dc54r" May 17 00:24:24.197499 kubelet[3161]: I0517 00:24:24.197499 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96f74f82-b6f8-4f3d-84f8-04278ccd8069-tigera-ca-bundle\") pod \"calico-typha-7b5c94df46-dc54r\" (UID: \"96f74f82-b6f8-4f3d-84f8-04278ccd8069\") " pod="calico-system/calico-typha-7b5c94df46-dc54r" May 17 00:24:24.198089 kubelet[3161]: I0517 00:24:24.197523 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/96f74f82-b6f8-4f3d-84f8-04278ccd8069-typha-certs\") pod \"calico-typha-7b5c94df46-dc54r\" (UID: \"96f74f82-b6f8-4f3d-84f8-04278ccd8069\") " pod="calico-system/calico-typha-7b5c94df46-dc54r" May 17 00:24:24.459823 containerd[1976]: time="2025-05-17T00:24:24.459680222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b5c94df46-dc54r,Uid:96f74f82-b6f8-4f3d-84f8-04278ccd8069,Namespace:calico-system,Attempt:0,}" May 17 00:24:24.491278 systemd[1]: Created slice kubepods-besteffort-pod373277e0_7f49_4e36_ad9a_cc17c6f8a933.slice - libcontainer container kubepods-besteffort-pod373277e0_7f49_4e36_ad9a_cc17c6f8a933.slice. May 17 00:24:24.499334 kubelet[3161]: I0517 00:24:24.498703 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/373277e0-7f49-4e36-ad9a-cc17c6f8a933-flexvol-driver-host\") pod \"calico-node-g7fz7\" (UID: \"373277e0-7f49-4e36-ad9a-cc17c6f8a933\") " pod="calico-system/calico-node-g7fz7" May 17 00:24:24.499334 kubelet[3161]: I0517 00:24:24.498735 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/373277e0-7f49-4e36-ad9a-cc17c6f8a933-var-lib-calico\") pod \"calico-node-g7fz7\" (UID: \"373277e0-7f49-4e36-ad9a-cc17c6f8a933\") " pod="calico-system/calico-node-g7fz7" May 17 00:24:24.499334 kubelet[3161]: I0517 00:24:24.498753 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j252\" (UniqueName: \"kubernetes.io/projected/373277e0-7f49-4e36-ad9a-cc17c6f8a933-kube-api-access-4j252\") pod \"calico-node-g7fz7\" (UID: \"373277e0-7f49-4e36-ad9a-cc17c6f8a933\") " pod="calico-system/calico-node-g7fz7" May 17 00:24:24.499334 kubelet[3161]: I0517 00:24:24.498769 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/373277e0-7f49-4e36-ad9a-cc17c6f8a933-cni-bin-dir\") pod \"calico-node-g7fz7\" (UID: \"373277e0-7f49-4e36-ad9a-cc17c6f8a933\") " pod="calico-system/calico-node-g7fz7" May 17 00:24:24.499334 kubelet[3161]: I0517 00:24:24.498785 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/373277e0-7f49-4e36-ad9a-cc17c6f8a933-cni-log-dir\") pod \"calico-node-g7fz7\" 
(UID: \"373277e0-7f49-4e36-ad9a-cc17c6f8a933\") " pod="calico-system/calico-node-g7fz7" May 17 00:24:24.499563 kubelet[3161]: I0517 00:24:24.498798 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/373277e0-7f49-4e36-ad9a-cc17c6f8a933-xtables-lock\") pod \"calico-node-g7fz7\" (UID: \"373277e0-7f49-4e36-ad9a-cc17c6f8a933\") " pod="calico-system/calico-node-g7fz7" May 17 00:24:24.499563 kubelet[3161]: I0517 00:24:24.498815 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/373277e0-7f49-4e36-ad9a-cc17c6f8a933-lib-modules\") pod \"calico-node-g7fz7\" (UID: \"373277e0-7f49-4e36-ad9a-cc17c6f8a933\") " pod="calico-system/calico-node-g7fz7" May 17 00:24:24.499563 kubelet[3161]: I0517 00:24:24.498828 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/373277e0-7f49-4e36-ad9a-cc17c6f8a933-tigera-ca-bundle\") pod \"calico-node-g7fz7\" (UID: \"373277e0-7f49-4e36-ad9a-cc17c6f8a933\") " pod="calico-system/calico-node-g7fz7" May 17 00:24:24.499563 kubelet[3161]: I0517 00:24:24.498843 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/373277e0-7f49-4e36-ad9a-cc17c6f8a933-cni-net-dir\") pod \"calico-node-g7fz7\" (UID: \"373277e0-7f49-4e36-ad9a-cc17c6f8a933\") " pod="calico-system/calico-node-g7fz7" May 17 00:24:24.499563 kubelet[3161]: I0517 00:24:24.498860 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/373277e0-7f49-4e36-ad9a-cc17c6f8a933-var-run-calico\") pod \"calico-node-g7fz7\" (UID: \"373277e0-7f49-4e36-ad9a-cc17c6f8a933\") " pod="calico-system/calico-node-g7fz7" May 17 00:24:24.499692 kubelet[3161]: I0517 00:24:24.498875 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/373277e0-7f49-4e36-ad9a-cc17c6f8a933-node-certs\") pod \"calico-node-g7fz7\" (UID: \"373277e0-7f49-4e36-ad9a-cc17c6f8a933\") " pod="calico-system/calico-node-g7fz7" May 17 00:24:24.499692 kubelet[3161]: I0517 00:24:24.498890 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/373277e0-7f49-4e36-ad9a-cc17c6f8a933-policysync\") pod \"calico-node-g7fz7\" (UID: \"373277e0-7f49-4e36-ad9a-cc17c6f8a933\") " pod="calico-system/calico-node-g7fz7" May 17 00:24:24.508653 containerd[1976]: time="2025-05-17T00:24:24.508555486Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:24:24.508764 containerd[1976]: time="2025-05-17T00:24:24.508668918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:24:24.508764 containerd[1976]: time="2025-05-17T00:24:24.508716938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:24.509498 containerd[1976]: time="2025-05-17T00:24:24.508881432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:24.570651 systemd[1]: Started cri-containerd-766e0f68f5da7723bbab7a698d8bb749ccfbd91f2b13ac7ac9be96940432b730.scope - libcontainer container 766e0f68f5da7723bbab7a698d8bb749ccfbd91f2b13ac7ac9be96940432b730. May 17 00:24:24.615639 kubelet[3161]: E0517 00:24:24.615602 3161 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:24.615639 kubelet[3161]: W0517 00:24:24.615636 3161 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:24.615639 kubelet[3161]: E0517 00:24:24.615669 3161 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:24:24.617978 kubelet[3161]: E0517 00:24:24.617951 3161 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:24.617978 kubelet[3161]: W0517 00:24:24.617971 3161 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:24.618120 kubelet[3161]: E0517 00:24:24.617992 3161 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:24:24.643720 containerd[1976]: time="2025-05-17T00:24:24.643062328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b5c94df46-dc54r,Uid:96f74f82-b6f8-4f3d-84f8-04278ccd8069,Namespace:calico-system,Attempt:0,} returns sandbox id \"766e0f68f5da7723bbab7a698d8bb749ccfbd91f2b13ac7ac9be96940432b730\"" May 17 00:24:24.645764 containerd[1976]: time="2025-05-17T00:24:24.645730154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 17 00:24:24.763091 kubelet[3161]: E0517 00:24:24.762393 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7knxl" podUID="154c5300-472e-444e-8595-31315d3f4aee" May 17 00:24:24.796414 containerd[1976]: time="2025-05-17T00:24:24.796372154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g7fz7,Uid:373277e0-7f49-4e36-ad9a-cc17c6f8a933,Namespace:calico-system,Attempt:0,}" May 17 00:24:24.797295 kubelet[3161]: E0517 00:24:24.797268 3161 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:24.797295 kubelet[3161]: W0517 00:24:24.797288 3161 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:24.797683 kubelet[3161]: E0517 00:24:24.797307 3161 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:24:24.798035 kubelet[3161]: E0517 00:24:24.797944 3161 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:24.798035 kubelet[3161]: W0517 00:24:24.797960 3161 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:24.798035 kubelet[3161]: E0517 00:24:24.797975 3161 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:24:24.798238 kubelet[3161]: E0517 00:24:24.798209 3161 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:24.798238 kubelet[3161]: W0517 00:24:24.798223 3161 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:24.798238 kubelet[3161]: E0517 00:24:24.798234 3161 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:24:24.798635 kubelet[3161]: E0517 00:24:24.798619 3161 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:24.798635 kubelet[3161]: W0517 00:24:24.798630 3161 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:24.798635 kubelet[3161]: E0517 00:24:24.798640 3161 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:24:24.799521 kubelet[3161]: E0517 00:24:24.799502 3161 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:24.799521 kubelet[3161]: W0517 00:24:24.799520 3161 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:24.799618 kubelet[3161]: E0517 00:24:24.799546 3161 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:24:24.799848 kubelet[3161]: E0517 00:24:24.799822 3161 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:24.799848 kubelet[3161]: W0517 00:24:24.799836 3161 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:24.799848 kubelet[3161]: E0517 00:24:24.799847 3161 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [... identical FlexVolume driver-call failure triplets repeated for the remaining nodeagent~uds probes ...] May 17 00:24:24.808060 kubelet[3161]: E0517 00:24:24.808006 3161 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:24:24.808060 kubelet[3161]: I0517 00:24:24.808044 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/154c5300-472e-444e-8595-31315d3f4aee-socket-dir\") pod \"csi-node-driver-7knxl\" (UID: \"154c5300-472e-444e-8595-31315d3f4aee\") " pod="calico-system/csi-node-driver-7knxl" May 17 00:24:24.808245 kubelet[3161]: E0517 00:24:24.808232 3161 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:24.808277 kubelet[3161]: W0517 00:24:24.808245 3161 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:24.808277 kubelet[3161]: E0517 00:24:24.808262 3161 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:24:24.808325 kubelet[3161]: I0517 00:24:24.808275 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/154c5300-472e-444e-8595-31315d3f4aee-registration-dir\") pod \"csi-node-driver-7knxl\" (UID: \"154c5300-472e-444e-8595-31315d3f4aee\") " pod="calico-system/csi-node-driver-7knxl" May 17 00:24:24.808473 kubelet[3161]: E0517 00:24:24.808461 3161 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:24.808498 kubelet[3161]: W0517 00:24:24.808473 3161 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:24.808498 kubelet[3161]: E0517 00:24:24.808487 3161 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:24:24.808579 kubelet[3161]: I0517 00:24:24.808513 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/154c5300-472e-444e-8595-31315d3f4aee-kubelet-dir\") pod \"csi-node-driver-7knxl\" (UID: \"154c5300-472e-444e-8595-31315d3f4aee\") " pod="calico-system/csi-node-driver-7knxl" May 17 00:24:24.809078 kubelet[3161]: E0517 00:24:24.809061 3161 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:24.809078 kubelet[3161]: W0517 00:24:24.809076 3161 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:24.809192 kubelet[3161]: E0517 00:24:24.809180 3161 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 17 00:24:24.809222 kubelet[3161]: I0517 00:24:24.809202 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/154c5300-472e-444e-8595-31315d3f4aee-varrun\") pod \"csi-node-driver-7knxl\" (UID: \"154c5300-472e-444e-8595-31315d3f4aee\") " pod="calico-system/csi-node-driver-7knxl"
May 17 00:24:24.810101 kubelet[3161]: E0517 00:24:24.810082 3161 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:24:24.810101 kubelet[3161]: W0517 00:24:24.810098 3161 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:24:24.810186 kubelet[3161]: E0517 00:24:24.810174 3161 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:24:24.810214 kubelet[3161]: I0517 00:24:24.810196 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45mqs\" (UniqueName: \"kubernetes.io/projected/154c5300-472e-444e-8595-31315d3f4aee-kube-api-access-45mqs\") pod \"csi-node-driver-7knxl\" (UID: \"154c5300-472e-444e-8595-31315d3f4aee\") " pod="calico-system/csi-node-driver-7knxl"
May 17 00:24:24.846405 containerd[1976]: time="2025-05-17T00:24:24.846295457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:24:24.846672 containerd[1976]: time="2025-05-17T00:24:24.846482679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:24:24.848381 containerd[1976]: time="2025-05-17T00:24:24.847434177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:24:24.848724 containerd[1976]: time="2025-05-17T00:24:24.848652272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:24:24.867843 systemd[1]: Started cri-containerd-f5c2deca964e0a929a8da0fe246c06f2e618bada5ed363973d3bb5aa7eb90bb5.scope - libcontainer container f5c2deca964e0a929a8da0fe246c06f2e618bada5ed363973d3bb5aa7eb90bb5.
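For context on the recurring kubelet lines above (an editorial sketch, not part of the captured log): the FlexVolume prober executes the driver binary found under the plugin directory and JSON-decodes whatever it prints to stdout. Because /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist yet at this point in boot, the call yields empty output, and decoding empty input in Go fails with exactly the "unexpected end of JSON input" message logged here. A minimal Go reproduction, with the struct shape assumed for illustration rather than copied from kubelet:

```go
// Sketch of the failure mode behind the repeated kubelet errors above:
// JSON-decoding the (empty) stdout of a driver binary that is missing.
package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus approximates the FlexVolume handshake reply; a working
// driver answers `init` with {"status":"Success","capabilities":{...}}.
// Field names here are illustrative, not kubelet source.
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	output := []byte("") // stdout of a driver call whose binary is absent

	var st driverStatus
	if err := json.Unmarshal(output, &st); err != nil {
		fmt.Println(err) // prints: unexpected end of JSON input
	}
}
```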
May 17 00:24:24.911407 kubelet[3161]: E0517 00:24:24.911375 3161 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:24:24.911407 kubelet[3161]: W0517 00:24:24.911396 3161 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:24:24.911407 kubelet[3161]: E0517 00:24:24.911416 3161 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:24:24.978900 containerd[1976]: time="2025-05-17T00:24:24.978857868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g7fz7,Uid:373277e0-7f49-4e36-ad9a-cc17c6f8a933,Namespace:calico-system,Attempt:0,} returns sandbox id \"f5c2deca964e0a929a8da0fe246c06f2e618bada5ed363973d3bb5aa7eb90bb5\""
May 17 00:24:26.457668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2455767123.mount: Deactivated successfully.
May 17 00:24:26.776955 kubelet[3161]: E0517 00:24:26.776909 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7knxl" podUID="154c5300-472e-444e-8595-31315d3f4aee"
May 17 00:24:27.423540 containerd[1976]: time="2025-05-17T00:24:27.423469158Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:24:27.425254 containerd[1976]: time="2025-05-17T00:24:27.425203818Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=35158669"
May 17 00:24:27.427383 containerd[1976]: time="2025-05-17T00:24:27.427285725Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:24:27.431428 containerd[1976]: time="2025-05-17T00:24:27.431368161Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:24:27.432469 containerd[1976]: time="2025-05-17T00:24:27.432108584Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 2.78634199s"
May 17 00:24:27.432469 containerd[1976]: time="2025-05-17T00:24:27.432142269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\""
May 17 00:24:27.433360 containerd[1976]: time="2025-05-17T00:24:27.433192854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\""
May 17 00:24:27.454830 containerd[1976]: time="2025-05-17T00:24:27.454757290Z" level=info msg="CreateContainer within sandbox \"766e0f68f5da7723bbab7a698d8bb749ccfbd91f2b13ac7ac9be96940432b730\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
May 17 00:24:27.497461 containerd[1976]: time="2025-05-17T00:24:27.497413434Z" level=info msg="CreateContainer within sandbox \"766e0f68f5da7723bbab7a698d8bb749ccfbd91f2b13ac7ac9be96940432b730\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"94cf28f596fd6abe3146d8dffae127093b6591fb2b6ffa36b5b012aa11d15f4f\""
May 17 00:24:27.498161 containerd[1976]: time="2025-05-17T00:24:27.498074532Z" level=info msg="StartContainer for \"94cf28f596fd6abe3146d8dffae127093b6591fb2b6ffa36b5b012aa11d15f4f\""
May 17 00:24:27.556693 systemd[1]: Started cri-containerd-94cf28f596fd6abe3146d8dffae127093b6591fb2b6ffa36b5b012aa11d15f4f.scope - libcontainer container 94cf28f596fd6abe3146d8dffae127093b6591fb2b6ffa36b5b012aa11d15f4f.
May 17 00:24:27.603138 containerd[1976]: time="2025-05-17T00:24:27.603085290Z" level=info msg="StartContainer for \"94cf28f596fd6abe3146d8dffae127093b6591fb2b6ffa36b5b012aa11d15f4f\" returns successfully"
May 17 00:24:27.926386 kubelet[3161]: E0517 00:24:27.926349 3161 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:24:27.926386 kubelet[3161]: W0517 00:24:27.926378 3161 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:24:27.927043 kubelet[3161]: E0517 00:24:27.926404 3161 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" May 17 00:24:28.441378 systemd[1]: run-containerd-runc-k8s.io-94cf28f596fd6abe3146d8dffae127093b6591fb2b6ffa36b5b012aa11d15f4f-runc.PADaJY.mount: Deactivated successfully. May 17 00:24:28.777560 kubelet[3161]: E0517 00:24:28.776278 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7knxl" podUID="154c5300-472e-444e-8595-31315d3f4aee" May 17 00:24:28.878865 kubelet[3161]: I0517 00:24:28.878837 3161 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:24:28.942704 kubelet[3161]: E0517 00:24:28.942477 3161 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:28.942704 kubelet[3161]: W0517 00:24:28.942506 3161 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:28.942704 kubelet[3161]: E0517 00:24:28.942555 3161 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:24:28.945358 kubelet[3161]: E0517 00:24:28.943822 3161 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:28.945358 kubelet[3161]: W0517 00:24:28.943843 3161 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:28.945358 kubelet[3161]: E0517 00:24:28.943969 3161 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:24:28.945358 kubelet[3161]: E0517 00:24:28.944304 3161 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:28.945358 kubelet[3161]: W0517 00:24:28.944317 3161 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:28.945358 kubelet[3161]: E0517 00:24:28.944333 3161 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:24:28.946160 kubelet[3161]: E0517 00:24:28.945930 3161 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:28.946160 kubelet[3161]: W0517 00:24:28.945944 3161 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:28.946160 kubelet[3161]: E0517 00:24:28.945961 3161 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:24:28.946599 kubelet[3161]: E0517 00:24:28.946442 3161 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:24:28.946599 kubelet[3161]: W0517 00:24:28.946456 3161 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:24:28.946599 kubelet[3161]: E0517 00:24:28.946471 3161 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:24:28.972846 containerd[1976]: time="2025-05-17T00:24:28.972712354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:24:28.975135 containerd[1976]: time="2025-05-17T00:24:28.975080202Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4441619"
May 17 00:24:28.977631 containerd[1976]: time="2025-05-17T00:24:28.977223641Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:24:28.981040 containerd[1976]: time="2025-05-17T00:24:28.981003315Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:24:28.981768 containerd[1976]: time="2025-05-17T00:24:28.981727491Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 1.548501853s"
May 17 00:24:28.981873 containerd[1976]: time="2025-05-17T00:24:28.981773650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\""
May 17 00:24:28.985325 containerd[1976]: time="2025-05-17T00:24:28.985275500Z" level=info msg="CreateContainer within sandbox \"f5c2deca964e0a929a8da0fe246c06f2e618bada5ed363973d3bb5aa7eb90bb5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
May 17 00:24:29.018999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount659647644.mount: Deactivated successfully.
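Note: the kubelet's FlexVolume probe keeps finding the plugin directory nodeagent~uds but not its driver binary; the failed exec produces no stdout, so driver-call.go's JSON unmarshal of an empty string fails with "unexpected end of JSON input". Below is a minimal sketch of the call contract those errors imply (a driver is an executable that takes a command such as init and prints a JSON status to stdout); it is illustrative only and not Calico's actual uds driver.

```python
#!/usr/bin/env python3
# Minimal sketch of the FlexVolume driver call contract that
# driver-call.go exercises above. Empty stdout from a missing or
# broken driver is exactly "unexpected end of JSON input".
import json
import sys

def main() -> None:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # kubelet unmarshals stdout as JSON; a well-behaved driver
        # answers init with a status object like this one.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
    else:
        print(json.dumps({"status": "Not supported"}))

if __name__ == "__main__":
    main()
```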
May 17 00:24:29.030996 containerd[1976]: time="2025-05-17T00:24:29.029751021Z" level=info msg="CreateContainer within sandbox \"f5c2deca964e0a929a8da0fe246c06f2e618bada5ed363973d3bb5aa7eb90bb5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e72aa6a87a85c4ac3fe5cdf293adac159b4b86513cb109c6cca24bc82d7201a8\""
May 17 00:24:29.031143 containerd[1976]: time="2025-05-17T00:24:29.031116148Z" level=info msg="StartContainer for \"e72aa6a87a85c4ac3fe5cdf293adac159b4b86513cb109c6cca24bc82d7201a8\""
May 17 00:24:29.049805 kubelet[3161]: E0517 00:24:29.049647 3161 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:24:29.049805 kubelet[3161]: W0517 00:24:29.049671 3161 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:24:29.049805 kubelet[3161]: E0517 00:24:29.049691 3161 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:24:29.067839 systemd[1]: Started cri-containerd-e72aa6a87a85c4ac3fe5cdf293adac159b4b86513cb109c6cca24bc82d7201a8.scope - libcontainer container e72aa6a87a85c4ac3fe5cdf293adac159b4b86513cb109c6cca24bc82d7201a8.
May 17 00:24:29.105420 containerd[1976]: time="2025-05-17T00:24:29.105382762Z" level=info msg="StartContainer for \"e72aa6a87a85c4ac3fe5cdf293adac159b4b86513cb109c6cca24bc82d7201a8\" returns successfully"
May 17 00:24:29.115394 systemd[1]: cri-containerd-e72aa6a87a85c4ac3fe5cdf293adac159b4b86513cb109c6cca24bc82d7201a8.scope: Deactivated successfully.
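Note: the pod2daemon-flexvol image pulled above is Calico's flexvol-driver init container, whose job is to copy the uds driver binary into the kubelet's FlexVolume plugin directory (/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/ on this host). That is presumably why the probe errors cease shortly after this container runs and exits.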
May 17 00:24:29.273721 containerd[1976]: time="2025-05-17T00:24:29.273595538Z" level=info msg="shim disconnected" id=e72aa6a87a85c4ac3fe5cdf293adac159b4b86513cb109c6cca24bc82d7201a8 namespace=k8s.io May 17 00:24:29.273721 containerd[1976]: time="2025-05-17T00:24:29.273701681Z" level=warning msg="cleaning up after shim disconnected" id=e72aa6a87a85c4ac3fe5cdf293adac159b4b86513cb109c6cca24bc82d7201a8 namespace=k8s.io May 17 00:24:29.273721 containerd[1976]: time="2025-05-17T00:24:29.273713979Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:24:29.286722 containerd[1976]: time="2025-05-17T00:24:29.286204063Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:24:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 17 00:24:29.441119 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e72aa6a87a85c4ac3fe5cdf293adac159b4b86513cb109c6cca24bc82d7201a8-rootfs.mount: Deactivated successfully. May 17 00:24:29.883519 containerd[1976]: time="2025-05-17T00:24:29.883474753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 17 00:24:29.901455 kubelet[3161]: I0517 00:24:29.901389 3161 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7b5c94df46-dc54r" podStartSLOduration=3.113457539 podStartE2EDuration="5.901373324s" podCreationTimestamp="2025-05-17 00:24:24 +0000 UTC" firstStartedPulling="2025-05-17 00:24:24.645056171 +0000 UTC m=+18.987516818" lastFinishedPulling="2025-05-17 00:24:27.43297197 +0000 UTC m=+21.775432603" observedRunningTime="2025-05-17 00:24:27.895666238 +0000 UTC m=+22.238126891" watchObservedRunningTime="2025-05-17 00:24:29.901373324 +0000 UTC m=+24.243833992" May 17 00:24:30.779219 kubelet[3161]: E0517 00:24:30.779160 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7knxl" podUID="154c5300-472e-444e-8595-31315d3f4aee" May 17 00:24:32.776496 kubelet[3161]: E0517 00:24:32.776353 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7knxl" podUID="154c5300-472e-444e-8595-31315d3f4aee" May 17 00:24:33.503918 containerd[1976]: time="2025-05-17T00:24:33.503859483Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:33.504950 containerd[1976]: time="2025-05-17T00:24:33.504904440Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 17 00:24:33.506063 containerd[1976]: time="2025-05-17T00:24:33.505992089Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:33.508886 containerd[1976]: time="2025-05-17T00:24:33.508832871Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:33.510201 
containerd[1976]: time="2025-05-17T00:24:33.509671625Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 3.626147959s" May 17 00:24:33.510201 containerd[1976]: time="2025-05-17T00:24:33.509743705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 17 00:24:33.513573 containerd[1976]: time="2025-05-17T00:24:33.513519259Z" level=info msg="CreateContainer within sandbox \"f5c2deca964e0a929a8da0fe246c06f2e618bada5ed363973d3bb5aa7eb90bb5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 17 00:24:33.537762 containerd[1976]: time="2025-05-17T00:24:33.537712385Z" level=info msg="CreateContainer within sandbox \"f5c2deca964e0a929a8da0fe246c06f2e618bada5ed363973d3bb5aa7eb90bb5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"bde7fef18e8b1abc127793fcd3d79c66aefb0e42476bbfce1184dd9e0c9ce327\"" May 17 00:24:33.538338 containerd[1976]: time="2025-05-17T00:24:33.538310575Z" level=info msg="StartContainer for \"bde7fef18e8b1abc127793fcd3d79c66aefb0e42476bbfce1184dd9e0c9ce327\"" May 17 00:24:33.578716 systemd[1]: Started cri-containerd-bde7fef18e8b1abc127793fcd3d79c66aefb0e42476bbfce1184dd9e0c9ce327.scope - libcontainer container bde7fef18e8b1abc127793fcd3d79c66aefb0e42476bbfce1184dd9e0c9ce327. May 17 00:24:33.615832 containerd[1976]: time="2025-05-17T00:24:33.615780131Z" level=info msg="StartContainer for \"bde7fef18e8b1abc127793fcd3d79c66aefb0e42476bbfce1184dd9e0c9ce327\" returns successfully" May 17 00:24:34.702412 systemd[1]: cri-containerd-bde7fef18e8b1abc127793fcd3d79c66aefb0e42476bbfce1184dd9e0c9ce327.scope: Deactivated successfully. May 17 00:24:34.731913 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bde7fef18e8b1abc127793fcd3d79c66aefb0e42476bbfce1184dd9e0c9ce327-rootfs.mount: Deactivated successfully. May 17 00:24:34.738995 containerd[1976]: time="2025-05-17T00:24:34.738935509Z" level=info msg="shim disconnected" id=bde7fef18e8b1abc127793fcd3d79c66aefb0e42476bbfce1184dd9e0c9ce327 namespace=k8s.io May 17 00:24:34.738995 containerd[1976]: time="2025-05-17T00:24:34.738989961Z" level=warning msg="cleaning up after shim disconnected" id=bde7fef18e8b1abc127793fcd3d79c66aefb0e42476bbfce1184dd9e0c9ce327 namespace=k8s.io May 17 00:24:34.738995 containerd[1976]: time="2025-05-17T00:24:34.739000835Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:24:34.751824 kubelet[3161]: I0517 00:24:34.751588 3161 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 17 00:24:34.757563 containerd[1976]: time="2025-05-17T00:24:34.757491973Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:24:34Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 17 00:24:34.782781 systemd[1]: Created slice kubepods-besteffort-pod154c5300_472e_444e_8595_31315d3f4aee.slice - libcontainer container kubepods-besteffort-pod154c5300_472e_444e_8595_31315d3f4aee.slice. 
May 17 00:24:34.788526 containerd[1976]: time="2025-05-17T00:24:34.788481079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7knxl,Uid:154c5300-472e-444e-8595-31315d3f4aee,Namespace:calico-system,Attempt:0,}" May 17 00:24:34.804868 systemd[1]: Created slice kubepods-burstable-pode41e279e_d875_4866_b909_66b33f148bb6.slice - libcontainer container kubepods-burstable-pode41e279e_d875_4866_b909_66b33f148bb6.slice. May 17 00:24:34.827354 systemd[1]: Created slice kubepods-burstable-pod9cde766c_cf7a_4494_a1ab_ccbb03aa389f.slice - libcontainer container kubepods-burstable-pod9cde766c_cf7a_4494_a1ab_ccbb03aa389f.slice. May 17 00:24:34.838565 kubelet[3161]: I0517 00:24:34.836269 3161 status_manager.go:890] "Failed to get status for pod" podUID="e41e279e-d875-4866-b909-66b33f148bb6" pod="kube-system/coredns-668d6bf9bc-klpmb" err="pods \"coredns-668d6bf9bc-klpmb\" is forbidden: User \"system:node:ip-172-31-18-208\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-18-208' and this object" May 17 00:24:34.849769 kubelet[3161]: W0517 00:24:34.840025 3161 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-18-208" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-208' and this object May 17 00:24:34.860511 kubelet[3161]: E0517 00:24:34.858591 3161 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ip-172-31-18-208\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-18-208' and this object" logger="UnhandledError" May 17 00:24:34.865181 systemd[1]: Created slice kubepods-besteffort-pod4c7db054_059f_46a4_9fc7_ca1358ceaf57.slice - libcontainer container kubepods-besteffort-pod4c7db054_059f_46a4_9fc7_ca1358ceaf57.slice. May 17 00:24:34.890266 systemd[1]: Created slice kubepods-besteffort-pod3edbec67_a280_4b9a_b567_9942c66f18d0.slice - libcontainer container kubepods-besteffort-pod3edbec67_a280_4b9a_b567_9942c66f18d0.slice. May 17 00:24:34.893130 kubelet[3161]: I0517 00:24:34.891672 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e41e279e-d875-4866-b909-66b33f148bb6-config-volume\") pod \"coredns-668d6bf9bc-klpmb\" (UID: \"e41e279e-d875-4866-b909-66b33f148bb6\") " pod="kube-system/coredns-668d6bf9bc-klpmb" May 17 00:24:34.893130 kubelet[3161]: I0517 00:24:34.891710 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzrwv\" (UniqueName: \"kubernetes.io/projected/e41e279e-d875-4866-b909-66b33f148bb6-kube-api-access-jzrwv\") pod \"coredns-668d6bf9bc-klpmb\" (UID: \"e41e279e-d875-4866-b909-66b33f148bb6\") " pod="kube-system/coredns-668d6bf9bc-klpmb" May 17 00:24:34.918966 systemd[1]: Created slice kubepods-besteffort-pod891b95b4_9f23_4ca3_aa2b_1578acf454d2.slice - libcontainer container kubepods-besteffort-pod891b95b4_9f23_4ca3_aa2b_1578acf454d2.slice. 
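Note: the "no relationship found between node 'ip-172-31-18-208' and this object" failures come from the node authorizer, which only lets a kubelet read ConfigMaps and Secrets referenced by pods already bound to that node; the coredns ConfigMap reads are rejected (and retried) until the API server's graph catches up with the new pod bindings. Separately, the RunPodSandbox attempts that follow all fail inside the Calico CNI plugin on a missing /var/lib/calico/nodename, the file the calico/node container writes once it is running. A minimal sketch of the precondition the error message itself tells the operator to check (path taken verbatim from the log):

```python
# Sketch of the check suggested by the sandbox errors below:
# /var/lib/calico/nodename is written by the calico/node container at
# startup, and the CNI plugin cannot set up pod networks until it exists.
import os
import sys

NODENAME = "/var/lib/calico/nodename"  # path quoted in the errors

if os.path.isfile(NODENAME):
    with open(NODENAME) as f:
        print(f"calico/node is initialized; nodename={f.read().strip()}")
else:
    # Mirrors the log's hint: "check that the calico/node container is
    # running and has mounted /var/lib/calico/".
    sys.exit(f"{NODENAME} missing: calico/node not running yet")
```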
May 17 00:24:34.953350 systemd[1]: Created slice kubepods-besteffort-podddf692d9_2f7b_48c5_85a5_b8c1de84fd75.slice - libcontainer container kubepods-besteffort-podddf692d9_2f7b_48c5_85a5_b8c1de84fd75.slice. May 17 00:24:34.971419 systemd[1]: Created slice kubepods-besteffort-podd0dadeac_75b4_4435_8bc5_fdac9115ed68.slice - libcontainer container kubepods-besteffort-podd0dadeac_75b4_4435_8bc5_fdac9115ed68.slice. May 17 00:24:34.992900 kubelet[3161]: I0517 00:24:34.992858 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6pbm\" (UniqueName: \"kubernetes.io/projected/4c7db054-059f-46a4-9fc7-ca1358ceaf57-kube-api-access-m6pbm\") pod \"calico-kube-controllers-58f54d8566-6bhlt\" (UID: \"4c7db054-059f-46a4-9fc7-ca1358ceaf57\") " pod="calico-system/calico-kube-controllers-58f54d8566-6bhlt" May 17 00:24:34.993552 kubelet[3161]: I0517 00:24:34.993224 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w7jf\" (UniqueName: \"kubernetes.io/projected/9cde766c-cf7a-4494-a1ab-ccbb03aa389f-kube-api-access-6w7jf\") pod \"coredns-668d6bf9bc-66bvn\" (UID: \"9cde766c-cf7a-4494-a1ab-ccbb03aa389f\") " pod="kube-system/coredns-668d6bf9bc-66bvn" May 17 00:24:34.993552 kubelet[3161]: I0517 00:24:34.993266 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ddf692d9-2f7b-48c5-85a5-b8c1de84fd75-calico-apiserver-certs\") pod \"calico-apiserver-8649d85dd-rkbxs\" (UID: \"ddf692d9-2f7b-48c5-85a5-b8c1de84fd75\") " pod="calico-apiserver/calico-apiserver-8649d85dd-rkbxs" May 17 00:24:34.993552 kubelet[3161]: I0517 00:24:34.993315 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d0dadeac-75b4-4435-8bc5-fdac9115ed68-whisker-backend-key-pair\") pod \"whisker-56d5b74c78-d7rc9\" (UID: \"d0dadeac-75b4-4435-8bc5-fdac9115ed68\") " pod="calico-system/whisker-56d5b74c78-d7rc9" May 17 00:24:34.993552 kubelet[3161]: I0517 00:24:34.993339 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0dadeac-75b4-4435-8bc5-fdac9115ed68-whisker-ca-bundle\") pod \"whisker-56d5b74c78-d7rc9\" (UID: \"d0dadeac-75b4-4435-8bc5-fdac9115ed68\") " pod="calico-system/whisker-56d5b74c78-d7rc9" May 17 00:24:34.993552 kubelet[3161]: I0517 00:24:34.993361 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rrs9\" (UniqueName: \"kubernetes.io/projected/d0dadeac-75b4-4435-8bc5-fdac9115ed68-kube-api-access-4rrs9\") pod \"whisker-56d5b74c78-d7rc9\" (UID: \"d0dadeac-75b4-4435-8bc5-fdac9115ed68\") " pod="calico-system/whisker-56d5b74c78-d7rc9" May 17 00:24:34.994273 kubelet[3161]: I0517 00:24:34.993416 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3edbec67-a280-4b9a-b567-9942c66f18d0-goldmane-key-pair\") pod \"goldmane-78d55f7ddc-w4ggj\" (UID: \"3edbec67-a280-4b9a-b567-9942c66f18d0\") " pod="calico-system/goldmane-78d55f7ddc-w4ggj" May 17 00:24:34.997809 kubelet[3161]: I0517 00:24:34.993444 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/4c7db054-059f-46a4-9fc7-ca1358ceaf57-tigera-ca-bundle\") pod \"calico-kube-controllers-58f54d8566-6bhlt\" (UID: \"4c7db054-059f-46a4-9fc7-ca1358ceaf57\") " pod="calico-system/calico-kube-controllers-58f54d8566-6bhlt" May 17 00:24:34.997809 kubelet[3161]: I0517 00:24:34.994671 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcgrz\" (UniqueName: \"kubernetes.io/projected/3edbec67-a280-4b9a-b567-9942c66f18d0-kube-api-access-mcgrz\") pod \"goldmane-78d55f7ddc-w4ggj\" (UID: \"3edbec67-a280-4b9a-b567-9942c66f18d0\") " pod="calico-system/goldmane-78d55f7ddc-w4ggj" May 17 00:24:34.997809 kubelet[3161]: I0517 00:24:34.994701 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3edbec67-a280-4b9a-b567-9942c66f18d0-goldmane-ca-bundle\") pod \"goldmane-78d55f7ddc-w4ggj\" (UID: \"3edbec67-a280-4b9a-b567-9942c66f18d0\") " pod="calico-system/goldmane-78d55f7ddc-w4ggj" May 17 00:24:34.997809 kubelet[3161]: I0517 00:24:34.994736 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bfwv\" (UniqueName: \"kubernetes.io/projected/891b95b4-9f23-4ca3-aa2b-1578acf454d2-kube-api-access-5bfwv\") pod \"calico-apiserver-8649d85dd-zpwmv\" (UID: \"891b95b4-9f23-4ca3-aa2b-1578acf454d2\") " pod="calico-apiserver/calico-apiserver-8649d85dd-zpwmv" May 17 00:24:34.997809 kubelet[3161]: I0517 00:24:34.994767 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/891b95b4-9f23-4ca3-aa2b-1578acf454d2-calico-apiserver-certs\") pod \"calico-apiserver-8649d85dd-zpwmv\" (UID: \"891b95b4-9f23-4ca3-aa2b-1578acf454d2\") " pod="calico-apiserver/calico-apiserver-8649d85dd-zpwmv" May 17 00:24:34.998070 kubelet[3161]: I0517 00:24:34.994847 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv6f9\" (UniqueName: \"kubernetes.io/projected/ddf692d9-2f7b-48c5-85a5-b8c1de84fd75-kube-api-access-xv6f9\") pod \"calico-apiserver-8649d85dd-rkbxs\" (UID: \"ddf692d9-2f7b-48c5-85a5-b8c1de84fd75\") " pod="calico-apiserver/calico-apiserver-8649d85dd-rkbxs" May 17 00:24:34.998070 kubelet[3161]: I0517 00:24:34.994907 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9cde766c-cf7a-4494-a1ab-ccbb03aa389f-config-volume\") pod \"coredns-668d6bf9bc-66bvn\" (UID: \"9cde766c-cf7a-4494-a1ab-ccbb03aa389f\") " pod="kube-system/coredns-668d6bf9bc-66bvn" May 17 00:24:34.998070 kubelet[3161]: I0517 00:24:34.994936 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3edbec67-a280-4b9a-b567-9942c66f18d0-config\") pod \"goldmane-78d55f7ddc-w4ggj\" (UID: \"3edbec67-a280-4b9a-b567-9942c66f18d0\") " pod="calico-system/goldmane-78d55f7ddc-w4ggj" May 17 00:24:35.009848 containerd[1976]: time="2025-05-17T00:24:35.007446233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 17 00:24:35.175579 containerd[1976]: time="2025-05-17T00:24:35.175524326Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-58f54d8566-6bhlt,Uid:4c7db054-059f-46a4-9fc7-ca1358ceaf57,Namespace:calico-system,Attempt:0,}" May 17 00:24:35.206550 containerd[1976]: time="2025-05-17T00:24:35.206185199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-w4ggj,Uid:3edbec67-a280-4b9a-b567-9942c66f18d0,Namespace:calico-system,Attempt:0,}" May 17 00:24:35.237496 containerd[1976]: time="2025-05-17T00:24:35.236585568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8649d85dd-zpwmv,Uid:891b95b4-9f23-4ca3-aa2b-1578acf454d2,Namespace:calico-apiserver,Attempt:0,}" May 17 00:24:35.268349 containerd[1976]: time="2025-05-17T00:24:35.268303984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8649d85dd-rkbxs,Uid:ddf692d9-2f7b-48c5-85a5-b8c1de84fd75,Namespace:calico-apiserver,Attempt:0,}" May 17 00:24:35.280903 containerd[1976]: time="2025-05-17T00:24:35.280861026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56d5b74c78-d7rc9,Uid:d0dadeac-75b4-4435-8bc5-fdac9115ed68,Namespace:calico-system,Attempt:0,}" May 17 00:24:35.487250 containerd[1976]: time="2025-05-17T00:24:35.486880510Z" level=error msg="Failed to destroy network for sandbox \"d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:35.496037 containerd[1976]: time="2025-05-17T00:24:35.495979266Z" level=error msg="encountered an error cleaning up failed sandbox \"d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:35.496188 containerd[1976]: time="2025-05-17T00:24:35.496073881Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7knxl,Uid:154c5300-472e-444e-8595-31315d3f4aee,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:35.496694 containerd[1976]: time="2025-05-17T00:24:35.496651473Z" level=error msg="Failed to destroy network for sandbox \"9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:35.497667 containerd[1976]: time="2025-05-17T00:24:35.497625382Z" level=error msg="encountered an error cleaning up failed sandbox \"9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:35.497772 containerd[1976]: time="2025-05-17T00:24:35.497699274Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-58f54d8566-6bhlt,Uid:4c7db054-059f-46a4-9fc7-ca1358ceaf57,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:35.508708 containerd[1976]: time="2025-05-17T00:24:35.508649193Z" level=error msg="Failed to destroy network for sandbox \"5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:35.511619 containerd[1976]: time="2025-05-17T00:24:35.509012200Z" level=error msg="encountered an error cleaning up failed sandbox \"5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:35.511619 containerd[1976]: time="2025-05-17T00:24:35.509114098Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-w4ggj,Uid:3edbec67-a280-4b9a-b567-9942c66f18d0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:35.515452 kubelet[3161]: E0517 00:24:35.510376 3161 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:35.515452 kubelet[3161]: E0517 00:24:35.510467 3161 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7knxl" May 17 00:24:35.515452 kubelet[3161]: E0517 00:24:35.510496 3161 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7knxl" May 17 00:24:35.515751 containerd[1976]: time="2025-05-17T00:24:35.512177985Z" level=error msg="Failed to destroy network for sandbox \"e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:35.515751 containerd[1976]: time="2025-05-17T00:24:35.513182924Z" level=error msg="encountered an error cleaning up failed sandbox \"e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:35.515751 containerd[1976]: time="2025-05-17T00:24:35.513238420Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8649d85dd-rkbxs,Uid:ddf692d9-2f7b-48c5-85a5-b8c1de84fd75,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:35.515751 containerd[1976]: time="2025-05-17T00:24:35.515364396Z" level=error msg="Failed to destroy network for sandbox \"49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:35.515991 kubelet[3161]: E0517 00:24:35.510559 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7knxl_calico-system(154c5300-472e-444e-8595-31315d3f4aee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7knxl_calico-system(154c5300-472e-444e-8595-31315d3f4aee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7knxl" podUID="154c5300-472e-444e-8595-31315d3f4aee" May 17 00:24:35.515991 kubelet[3161]: E0517 00:24:35.511542 3161 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:35.515991 kubelet[3161]: E0517 00:24:35.511778 3161 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-w4ggj" May 17 00:24:35.516171 kubelet[3161]: E0517 00:24:35.511808 3161 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-w4ggj" May 17 00:24:35.516171 kubelet[3161]: E0517 00:24:35.511855 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-w4ggj_calico-system(3edbec67-a280-4b9a-b567-9942c66f18d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-w4ggj_calico-system(3edbec67-a280-4b9a-b567-9942c66f18d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-w4ggj" podUID="3edbec67-a280-4b9a-b567-9942c66f18d0" May 17 00:24:35.516171 kubelet[3161]: E0517 00:24:35.511910 3161 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:35.516340 kubelet[3161]: E0517 00:24:35.511933 3161 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58f54d8566-6bhlt" May 17 00:24:35.516340 kubelet[3161]: E0517 00:24:35.511975 3161 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58f54d8566-6bhlt" May 17 00:24:35.516340 kubelet[3161]: E0517 00:24:35.512009 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-58f54d8566-6bhlt_calico-system(4c7db054-059f-46a4-9fc7-ca1358ceaf57)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-58f54d8566-6bhlt_calico-system(4c7db054-059f-46a4-9fc7-ca1358ceaf57)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58f54d8566-6bhlt" podUID="4c7db054-059f-46a4-9fc7-ca1358ceaf57" May 17 00:24:35.516511 kubelet[3161]: E0517 00:24:35.514031 3161 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:35.516511 kubelet[3161]: E0517 00:24:35.514083 3161 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8649d85dd-rkbxs" May 17 00:24:35.516511 kubelet[3161]: E0517 00:24:35.514110 3161 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8649d85dd-rkbxs" May 17 00:24:35.517028 kubelet[3161]: E0517 00:24:35.514155 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8649d85dd-rkbxs_calico-apiserver(ddf692d9-2f7b-48c5-85a5-b8c1de84fd75)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8649d85dd-rkbxs_calico-apiserver(ddf692d9-2f7b-48c5-85a5-b8c1de84fd75)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8649d85dd-rkbxs" podUID="ddf692d9-2f7b-48c5-85a5-b8c1de84fd75" May 17 00:24:35.517328 containerd[1976]: time="2025-05-17T00:24:35.517291766Z" level=error msg="encountered an error cleaning up failed sandbox \"49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:35.517483 containerd[1976]: time="2025-05-17T00:24:35.517447090Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8649d85dd-zpwmv,Uid:891b95b4-9f23-4ca3-aa2b-1578acf454d2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:35.519816 kubelet[3161]: E0517 00:24:35.519635 3161 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:35.519816 kubelet[3161]: E0517 00:24:35.519686 3161 kuberuntime_sandbox.go:72] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8649d85dd-zpwmv" May 17 00:24:35.519816 kubelet[3161]: E0517 00:24:35.519710 3161 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8649d85dd-zpwmv" May 17 00:24:35.523050 kubelet[3161]: E0517 00:24:35.519759 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8649d85dd-zpwmv_calico-apiserver(891b95b4-9f23-4ca3-aa2b-1578acf454d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8649d85dd-zpwmv_calico-apiserver(891b95b4-9f23-4ca3-aa2b-1578acf454d2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8649d85dd-zpwmv" podUID="891b95b4-9f23-4ca3-aa2b-1578acf454d2" May 17 00:24:35.531740 containerd[1976]: time="2025-05-17T00:24:35.531696302Z" level=error msg="Failed to destroy network for sandbox \"828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:35.532127 containerd[1976]: time="2025-05-17T00:24:35.532090627Z" level=error msg="encountered an error cleaning up failed sandbox \"828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:35.532232 containerd[1976]: time="2025-05-17T00:24:35.532162037Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56d5b74c78-d7rc9,Uid:d0dadeac-75b4-4435-8bc5-fdac9115ed68,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:35.532415 kubelet[3161]: E0517 00:24:35.532359 3161 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" May 17 00:24:35.532574 kubelet[3161]: E0517 00:24:35.532413 3161 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-56d5b74c78-d7rc9" May 17 00:24:35.532574 kubelet[3161]: E0517 00:24:35.532443 3161 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-56d5b74c78-d7rc9" May 17 00:24:35.532574 kubelet[3161]: E0517 00:24:35.532488 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-56d5b74c78-d7rc9_calico-system(d0dadeac-75b4-4435-8bc5-fdac9115ed68)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-56d5b74c78-d7rc9_calico-system(d0dadeac-75b4-4435-8bc5-fdac9115ed68)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-56d5b74c78-d7rc9" podUID="d0dadeac-75b4-4435-8bc5-fdac9115ed68" May 17 00:24:35.741617 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1-shm.mount: Deactivated successfully. May 17 00:24:35.998820 kubelet[3161]: I0517 00:24:35.998392 3161 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84" May 17 00:24:35.998820 kubelet[3161]: E0517 00:24:35.998746 3161 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition May 17 00:24:36.015724 kubelet[3161]: I0517 00:24:36.015282 3161 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b" May 17 00:24:36.018721 kubelet[3161]: E0517 00:24:36.018673 3161 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e41e279e-d875-4866-b909-66b33f148bb6-config-volume podName:e41e279e-d875-4866-b909-66b33f148bb6 nodeName:}" failed. No retries permitted until 2025-05-17 00:24:36.498785855 +0000 UTC m=+30.841246500 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e41e279e-d875-4866-b909-66b33f148bb6-config-volume") pod "coredns-668d6bf9bc-klpmb" (UID: "e41e279e-d875-4866-b909-66b33f148bb6") : failed to sync configmap cache: timed out waiting for the condition May 17 00:24:36.021705 containerd[1976]: time="2025-05-17T00:24:36.021401928Z" level=info msg="StopPodSandbox for \"e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b\"" May 17 00:24:36.022825 containerd[1976]: time="2025-05-17T00:24:36.022072132Z" level=info msg="StopPodSandbox for \"828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84\"" May 17 00:24:36.022951 containerd[1976]: time="2025-05-17T00:24:36.022923225Z" level=info msg="Ensure that sandbox e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b in task-service has been cleanup successfully" May 17 00:24:36.023314 containerd[1976]: time="2025-05-17T00:24:36.022925512Z" level=info msg="Ensure that sandbox 828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84 in task-service has been cleanup successfully" May 17 00:24:36.025780 kubelet[3161]: I0517 00:24:36.025749 3161 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1" May 17 00:24:36.026469 containerd[1976]: time="2025-05-17T00:24:36.026349419Z" level=info msg="StopPodSandbox for \"d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1\"" May 17 00:24:36.027467 containerd[1976]: time="2025-05-17T00:24:36.027444445Z" level=info msg="Ensure that sandbox d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1 in task-service has been cleanup successfully" May 17 00:24:36.027876 kubelet[3161]: I0517 00:24:36.027776 3161 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000" May 17 00:24:36.029007 containerd[1976]: time="2025-05-17T00:24:36.028972099Z" level=info msg="StopPodSandbox for \"49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000\"" May 17 00:24:36.031151 containerd[1976]: time="2025-05-17T00:24:36.031030698Z" level=info msg="Ensure that sandbox 49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000 in task-service has been cleanup successfully" May 17 00:24:36.038266 kubelet[3161]: I0517 00:24:36.038222 3161 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8" May 17 00:24:36.043284 containerd[1976]: time="2025-05-17T00:24:36.043174910Z" level=info msg="StopPodSandbox for \"5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8\"" May 17 00:24:36.043498 containerd[1976]: time="2025-05-17T00:24:36.043339691Z" level=info msg="Ensure that sandbox 5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8 in task-service has been cleanup successfully" May 17 00:24:36.048727 kubelet[3161]: I0517 00:24:36.048701 3161 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56" May 17 00:24:36.049146 containerd[1976]: time="2025-05-17T00:24:36.049118457Z" level=info msg="StopPodSandbox for \"9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56\"" May 17 00:24:36.050872 containerd[1976]: time="2025-05-17T00:24:36.049717814Z" level=info msg="Ensure that sandbox 
9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56 in task-service has been cleanup successfully" May 17 00:24:36.090109 containerd[1976]: time="2025-05-17T00:24:36.090053145Z" level=error msg="StopPodSandbox for \"e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b\" failed" error="failed to destroy network for sandbox \"e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:36.090765 kubelet[3161]: E0517 00:24:36.090651 3161 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b" May 17 00:24:36.099991 kubelet[3161]: E0517 00:24:36.099955 3161 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition May 17 00:24:36.100553 kubelet[3161]: E0517 00:24:36.100031 3161 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9cde766c-cf7a-4494-a1ab-ccbb03aa389f-config-volume podName:9cde766c-cf7a-4494-a1ab-ccbb03aa389f nodeName:}" failed. No retries permitted until 2025-05-17 00:24:36.60001334 +0000 UTC m=+30.942473984 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9cde766c-cf7a-4494-a1ab-ccbb03aa389f-config-volume") pod "coredns-668d6bf9bc-66bvn" (UID: "9cde766c-cf7a-4494-a1ab-ccbb03aa389f") : failed to sync configmap cache: timed out waiting for the condition May 17 00:24:36.103370 kubelet[3161]: E0517 00:24:36.092617 3161 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b"} May 17 00:24:36.103459 kubelet[3161]: E0517 00:24:36.103395 3161 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ddf692d9-2f7b-48c5-85a5-b8c1de84fd75\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:24:36.103459 kubelet[3161]: E0517 00:24:36.103420 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ddf692d9-2f7b-48c5-85a5-b8c1de84fd75\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8649d85dd-rkbxs" podUID="ddf692d9-2f7b-48c5-85a5-b8c1de84fd75" May 17 00:24:36.136976 containerd[1976]: time="2025-05-17T00:24:36.136931009Z" level=error msg="StopPodSandbox for 
\"d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1\" failed" error="failed to destroy network for sandbox \"d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:36.137259 kubelet[3161]: E0517 00:24:36.137225 3161 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1" May 17 00:24:36.137314 kubelet[3161]: E0517 00:24:36.137274 3161 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1"} May 17 00:24:36.137341 kubelet[3161]: E0517 00:24:36.137310 3161 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"154c5300-472e-444e-8595-31315d3f4aee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:24:36.137410 kubelet[3161]: E0517 00:24:36.137329 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"154c5300-472e-444e-8595-31315d3f4aee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7knxl" podUID="154c5300-472e-444e-8595-31315d3f4aee" May 17 00:24:36.139334 containerd[1976]: time="2025-05-17T00:24:36.139120163Z" level=error msg="StopPodSandbox for \"9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56\" failed" error="failed to destroy network for sandbox \"9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:36.139525 kubelet[3161]: E0517 00:24:36.139313 3161 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56" May 17 00:24:36.139525 kubelet[3161]: E0517 00:24:36.139460 3161 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56"} May 17 00:24:36.139800 kubelet[3161]: E0517 00:24:36.139657 3161 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4c7db054-059f-46a4-9fc7-ca1358ceaf57\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:24:36.139800 kubelet[3161]: E0517 00:24:36.139687 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4c7db054-059f-46a4-9fc7-ca1358ceaf57\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58f54d8566-6bhlt" podUID="4c7db054-059f-46a4-9fc7-ca1358ceaf57" May 17 00:24:36.141614 containerd[1976]: time="2025-05-17T00:24:36.141482658Z" level=error msg="StopPodSandbox for \"49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000\" failed" error="failed to destroy network for sandbox \"49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:36.142464 kubelet[3161]: E0517 00:24:36.141911 3161 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000" May 17 00:24:36.142464 kubelet[3161]: E0517 00:24:36.141964 3161 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000"} May 17 00:24:36.142971 kubelet[3161]: E0517 00:24:36.142715 3161 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"891b95b4-9f23-4ca3-aa2b-1578acf454d2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:24:36.142971 kubelet[3161]: E0517 00:24:36.142894 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"891b95b4-9f23-4ca3-aa2b-1578acf454d2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8649d85dd-zpwmv" podUID="891b95b4-9f23-4ca3-aa2b-1578acf454d2" May 17 00:24:36.143258 containerd[1976]: time="2025-05-17T00:24:36.142037520Z" level=error msg="StopPodSandbox for \"828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84\" failed" error="failed to destroy network for sandbox \"828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:36.143572 kubelet[3161]: E0517 00:24:36.143437 3161 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84" May 17 00:24:36.143572 kubelet[3161]: E0517 00:24:36.143465 3161 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84"} May 17 00:24:36.143572 kubelet[3161]: E0517 00:24:36.143488 3161 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d0dadeac-75b4-4435-8bc5-fdac9115ed68\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:24:36.143766 kubelet[3161]: E0517 00:24:36.143508 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d0dadeac-75b4-4435-8bc5-fdac9115ed68\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-56d5b74c78-d7rc9" podUID="d0dadeac-75b4-4435-8bc5-fdac9115ed68" May 17 00:24:36.147098 containerd[1976]: time="2025-05-17T00:24:36.147059591Z" level=error msg="StopPodSandbox for \"5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8\" failed" error="failed to destroy network for sandbox \"5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:36.147873 kubelet[3161]: E0517 00:24:36.147178 3161 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8" May 17 00:24:36.147873 kubelet[3161]: E0517 00:24:36.147206 3161 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8"} May 17 00:24:36.147873 kubelet[3161]: E0517 00:24:36.147235 3161 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3edbec67-a280-4b9a-b567-9942c66f18d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:24:36.147873 kubelet[3161]: E0517 00:24:36.147254 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3edbec67-a280-4b9a-b567-9942c66f18d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-w4ggj" podUID="3edbec67-a280-4b9a-b567-9942c66f18d0" May 17 00:24:36.618999 containerd[1976]: time="2025-05-17T00:24:36.618940333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-klpmb,Uid:e41e279e-d875-4866-b909-66b33f148bb6,Namespace:kube-system,Attempt:0,}" May 17 00:24:36.633238 containerd[1976]: time="2025-05-17T00:24:36.633198180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-66bvn,Uid:9cde766c-cf7a-4494-a1ab-ccbb03aa389f,Namespace:kube-system,Attempt:0,}" May 17 00:24:36.735163 containerd[1976]: time="2025-05-17T00:24:36.735120727Z" level=error msg="Failed to destroy network for sandbox \"1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:36.736915 containerd[1976]: time="2025-05-17T00:24:36.736767676Z" level=error msg="encountered an error cleaning up failed sandbox \"1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:36.736915 containerd[1976]: time="2025-05-17T00:24:36.736821000Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-66bvn,Uid:9cde766c-cf7a-4494-a1ab-ccbb03aa389f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:36.738983 kubelet[3161]: E0517 00:24:36.737675 3161 log.go:32] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:36.738983 kubelet[3161]: E0517 00:24:36.737726 3161 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-66bvn" May 17 00:24:36.738983 kubelet[3161]: E0517 00:24:36.737745 3161 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-66bvn" May 17 00:24:36.743227 kubelet[3161]: E0517 00:24:36.737787 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-66bvn_kube-system(9cde766c-cf7a-4494-a1ab-ccbb03aa389f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-66bvn_kube-system(9cde766c-cf7a-4494-a1ab-ccbb03aa389f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-66bvn" podUID="9cde766c-cf7a-4494-a1ab-ccbb03aa389f" May 17 00:24:36.739196 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1-shm.mount: Deactivated successfully. 
May 17 00:24:36.750004 containerd[1976]: time="2025-05-17T00:24:36.749956773Z" level=error msg="Failed to destroy network for sandbox \"624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:36.750297 containerd[1976]: time="2025-05-17T00:24:36.750256698Z" level=error msg="encountered an error cleaning up failed sandbox \"624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:36.750362 containerd[1976]: time="2025-05-17T00:24:36.750318250Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-klpmb,Uid:e41e279e-d875-4866-b909-66b33f148bb6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:36.750631 kubelet[3161]: E0517 00:24:36.750573 3161 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:36.750712 kubelet[3161]: E0517 00:24:36.750629 3161 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-klpmb" May 17 00:24:36.750712 kubelet[3161]: E0517 00:24:36.750649 3161 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-klpmb" May 17 00:24:36.750712 kubelet[3161]: E0517 00:24:36.750685 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-klpmb_kube-system(e41e279e-d875-4866-b909-66b33f148bb6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-klpmb_kube-system(e41e279e-d875-4866-b909-66b33f148bb6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-klpmb" 
podUID="e41e279e-d875-4866-b909-66b33f148bb6" May 17 00:24:36.754065 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73-shm.mount: Deactivated successfully. May 17 00:24:37.051508 kubelet[3161]: I0517 00:24:37.051473 3161 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73" May 17 00:24:37.053246 containerd[1976]: time="2025-05-17T00:24:37.052284160Z" level=info msg="StopPodSandbox for \"624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73\"" May 17 00:24:37.053246 containerd[1976]: time="2025-05-17T00:24:37.052481177Z" level=info msg="Ensure that sandbox 624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73 in task-service has been cleanup successfully" May 17 00:24:37.055456 kubelet[3161]: I0517 00:24:37.054168 3161 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1" May 17 00:24:37.055935 containerd[1976]: time="2025-05-17T00:24:37.054853452Z" level=info msg="StopPodSandbox for \"1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1\"" May 17 00:24:37.056273 containerd[1976]: time="2025-05-17T00:24:37.056178022Z" level=info msg="Ensure that sandbox 1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1 in task-service has been cleanup successfully" May 17 00:24:37.125556 containerd[1976]: time="2025-05-17T00:24:37.125494115Z" level=error msg="StopPodSandbox for \"624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73\" failed" error="failed to destroy network for sandbox \"624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:37.126245 kubelet[3161]: E0517 00:24:37.125858 3161 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73" May 17 00:24:37.126245 kubelet[3161]: E0517 00:24:37.125910 3161 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73"} May 17 00:24:37.126245 kubelet[3161]: E0517 00:24:37.125944 3161 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e41e279e-d875-4866-b909-66b33f148bb6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:24:37.126245 kubelet[3161]: E0517 00:24:37.125964 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e41e279e-d875-4866-b909-66b33f148bb6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to 
destroy network for sandbox \\\"624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-klpmb" podUID="e41e279e-d875-4866-b909-66b33f148bb6" May 17 00:24:37.137641 containerd[1976]: time="2025-05-17T00:24:37.137590769Z" level=error msg="StopPodSandbox for \"1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1\" failed" error="failed to destroy network for sandbox \"1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:37.137839 kubelet[3161]: E0517 00:24:37.137792 3161 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1" May 17 00:24:37.137911 kubelet[3161]: E0517 00:24:37.137839 3161 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1"} May 17 00:24:37.137911 kubelet[3161]: E0517 00:24:37.137869 3161 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9cde766c-cf7a-4494-a1ab-ccbb03aa389f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:24:37.137911 kubelet[3161]: E0517 00:24:37.137890 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9cde766c-cf7a-4494-a1ab-ccbb03aa389f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-66bvn" podUID="9cde766c-cf7a-4494-a1ab-ccbb03aa389f" May 17 00:24:43.460453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount70831911.mount: Deactivated successfully. 
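Every CNI ADD and DEL in the stretch above fails for the same reason: the Calico CNI plugin resolves the node name from /var/lib/calico/nodename, a file the calico/node container writes once it is up, and that container had not started yet. A minimal sketch of the failing step, assuming the plain read-the-file behavior the error message describes (an illustration, not the actual Calico source):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename" // written by calico/node on startup

// detectNodename mirrors the failing step: until calico/node has written
// this file, every attempt produces the "no such file or directory" error
// that kubelet keeps logging above.
func detectNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", fmt.Errorf("stat %s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile, err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := detectNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node name:", name)
}
```

The failures stop recurring once the image pull below completes and calico-node starts.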
May 17 00:24:43.526127 containerd[1976]: time="2025-05-17T00:24:43.524972599Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 17 00:24:43.531240 containerd[1976]: time="2025-05-17T00:24:43.530798684Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 8.520099112s" May 17 00:24:43.531240 containerd[1976]: time="2025-05-17T00:24:43.530866388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 17 00:24:43.558473 containerd[1976]: time="2025-05-17T00:24:43.558421310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:43.600152 containerd[1976]: time="2025-05-17T00:24:43.600115908Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:43.600843 containerd[1976]: time="2025-05-17T00:24:43.600778907Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:43.619358 containerd[1976]: time="2025-05-17T00:24:43.619313671Z" level=info msg="CreateContainer within sandbox \"f5c2deca964e0a929a8da0fe246c06f2e618bada5ed363973d3bb5aa7eb90bb5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 17 00:24:43.668212 containerd[1976]: time="2025-05-17T00:24:43.668160941Z" level=info msg="CreateContainer within sandbox \"f5c2deca964e0a929a8da0fe246c06f2e618bada5ed363973d3bb5aa7eb90bb5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bd5d060defa27c2630bbaaa2289ae701dbaf4206f3c1dc46611636c512d1a57e\"" May 17 00:24:43.678614 containerd[1976]: time="2025-05-17T00:24:43.678576654Z" level=info msg="StartContainer for \"bd5d060defa27c2630bbaaa2289ae701dbaf4206f3c1dc46611636c512d1a57e\"" May 17 00:24:43.812329 systemd[1]: Started cri-containerd-bd5d060defa27c2630bbaaa2289ae701dbaf4206f3c1dc46611636c512d1a57e.scope - libcontainer container bd5d060defa27c2630bbaaa2289ae701dbaf4206f3c1dc46611636c512d1a57e. May 17 00:24:43.861570 containerd[1976]: time="2025-05-17T00:24:43.861504420Z" level=info msg="StartContainer for \"bd5d060defa27c2630bbaaa2289ae701dbaf4206f3c1dc46611636c512d1a57e\" returns successfully" May 17 00:24:44.122884 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 17 00:24:44.124086 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
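The 156,396,234-byte calico/node image landed in 8.52s, roughly 17.5 MiB/s from ghcr.io — a quick check using only the two numbers logged above:

```go
package main

import "fmt"

func main() {
	const sizeBytes = 156396234.0 // image size from the "Pulled image" line above
	const seconds = 8.520099112   // pull duration from the same line
	fmt.Printf("%.1f MiB/s\n", sizeBytes/seconds/(1<<20)) // ≈ 17.5 MiB/s
}
```

The wireguard module load that follows is consistent with calico-node probing for WireGuard support at startup (Calico can use it for node-to-node encryption), though the log itself does not record what triggered the load.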
May 17 00:24:44.170888 kubelet[3161]: I0517 00:24:44.154628 3161 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-g7fz7" podStartSLOduration=1.575478 podStartE2EDuration="20.126749686s" podCreationTimestamp="2025-05-17 00:24:24 +0000 UTC" firstStartedPulling="2025-05-17 00:24:24.980490613 +0000 UTC m=+19.322951256" lastFinishedPulling="2025-05-17 00:24:43.531762312 +0000 UTC m=+37.874222942" observedRunningTime="2025-05-17 00:24:44.125428973 +0000 UTC m=+38.467889626" watchObservedRunningTime="2025-05-17 00:24:44.126749686 +0000 UTC m=+38.469210395" May 17 00:24:44.433857 containerd[1976]: time="2025-05-17T00:24:44.433034265Z" level=info msg="StopPodSandbox for \"828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84\"" May 17 00:24:44.961918 containerd[1976]: 2025-05-17 00:24:44.554 [INFO][4498] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84" May 17 00:24:44.961918 containerd[1976]: 2025-05-17 00:24:44.556 [INFO][4498] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84" iface="eth0" netns="/var/run/netns/cni-0af3d98f-e815-958f-4c41-b89f034e706d" May 17 00:24:44.961918 containerd[1976]: 2025-05-17 00:24:44.557 [INFO][4498] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84" iface="eth0" netns="/var/run/netns/cni-0af3d98f-e815-958f-4c41-b89f034e706d" May 17 00:24:44.961918 containerd[1976]: 2025-05-17 00:24:44.559 [INFO][4498] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84" iface="eth0" netns="/var/run/netns/cni-0af3d98f-e815-958f-4c41-b89f034e706d" May 17 00:24:44.961918 containerd[1976]: 2025-05-17 00:24:44.559 [INFO][4498] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84" May 17 00:24:44.961918 containerd[1976]: 2025-05-17 00:24:44.559 [INFO][4498] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84" May 17 00:24:44.961918 containerd[1976]: 2025-05-17 00:24:44.930 [INFO][4506] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84" HandleID="k8s-pod-network.828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84" Workload="ip--172--31--18--208-k8s-whisker--56d5b74c78--d7rc9-eth0" May 17 00:24:44.961918 containerd[1976]: 2025-05-17 00:24:44.935 [INFO][4506] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:44.961918 containerd[1976]: 2025-05-17 00:24:44.937 [INFO][4506] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:44.961918 containerd[1976]: 2025-05-17 00:24:44.956 [WARNING][4506] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84" HandleID="k8s-pod-network.828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84" Workload="ip--172--31--18--208-k8s-whisker--56d5b74c78--d7rc9-eth0" May 17 00:24:44.961918 containerd[1976]: 2025-05-17 00:24:44.956 [INFO][4506] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84" HandleID="k8s-pod-network.828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84" Workload="ip--172--31--18--208-k8s-whisker--56d5b74c78--d7rc9-eth0" May 17 00:24:44.961918 containerd[1976]: 2025-05-17 00:24:44.957 [INFO][4506] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:44.961918 containerd[1976]: 2025-05-17 00:24:44.959 [INFO][4498] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84" May 17 00:24:44.962826 containerd[1976]: time="2025-05-17T00:24:44.962019340Z" level=info msg="TearDown network for sandbox \"828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84\" successfully" May 17 00:24:44.962826 containerd[1976]: time="2025-05-17T00:24:44.962041163Z" level=info msg="StopPodSandbox for \"828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84\" returns successfully" May 17 00:24:44.966093 systemd[1]: run-netns-cni\x2d0af3d98f\x2de815\x2d958f\x2d4c41\x2db89f034e706d.mount: Deactivated successfully. May 17 00:24:45.026486 kubelet[3161]: I0517 00:24:45.026440 3161 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0dadeac-75b4-4435-8bc5-fdac9115ed68-whisker-ca-bundle\") pod \"d0dadeac-75b4-4435-8bc5-fdac9115ed68\" (UID: \"d0dadeac-75b4-4435-8bc5-fdac9115ed68\") " May 17 00:24:45.026667 kubelet[3161]: I0517 00:24:45.026543 3161 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rrs9\" (UniqueName: \"kubernetes.io/projected/d0dadeac-75b4-4435-8bc5-fdac9115ed68-kube-api-access-4rrs9\") pod \"d0dadeac-75b4-4435-8bc5-fdac9115ed68\" (UID: \"d0dadeac-75b4-4435-8bc5-fdac9115ed68\") " May 17 00:24:45.026667 kubelet[3161]: I0517 00:24:45.026566 3161 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d0dadeac-75b4-4435-8bc5-fdac9115ed68-whisker-backend-key-pair\") pod \"d0dadeac-75b4-4435-8bc5-fdac9115ed68\" (UID: \"d0dadeac-75b4-4435-8bc5-fdac9115ed68\") " May 17 00:24:45.040410 systemd[1]: var-lib-kubelet-pods-d0dadeac\x2d75b4\x2d4435\x2d8bc5\x2dfdac9115ed68-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4rrs9.mount: Deactivated successfully. May 17 00:24:45.040517 systemd[1]: var-lib-kubelet-pods-d0dadeac\x2d75b4\x2d4435\x2d8bc5\x2dfdac9115ed68-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 17 00:24:45.044042 kubelet[3161]: I0517 00:24:45.042554 3161 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0dadeac-75b4-4435-8bc5-fdac9115ed68-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d0dadeac-75b4-4435-8bc5-fdac9115ed68" (UID: "d0dadeac-75b4-4435-8bc5-fdac9115ed68"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 00:24:45.044256 kubelet[3161]: I0517 00:24:45.042608 3161 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0dadeac-75b4-4435-8bc5-fdac9115ed68-kube-api-access-4rrs9" (OuterVolumeSpecName: "kube-api-access-4rrs9") pod "d0dadeac-75b4-4435-8bc5-fdac9115ed68" (UID: "d0dadeac-75b4-4435-8bc5-fdac9115ed68"). InnerVolumeSpecName "kube-api-access-4rrs9". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:24:45.044256 kubelet[3161]: I0517 00:24:45.044247 3161 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0dadeac-75b4-4435-8bc5-fdac9115ed68-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d0dadeac-75b4-4435-8bc5-fdac9115ed68" (UID: "d0dadeac-75b4-4435-8bc5-fdac9115ed68"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 00:24:45.108776 systemd[1]: Removed slice kubepods-besteffort-podd0dadeac_75b4_4435_8bc5_fdac9115ed68.slice - libcontainer container kubepods-besteffort-podd0dadeac_75b4_4435_8bc5_fdac9115ed68.slice. May 17 00:24:45.154009 systemd[1]: run-containerd-runc-k8s.io-bd5d060defa27c2630bbaaa2289ae701dbaf4206f3c1dc46611636c512d1a57e-runc.BEZwaS.mount: Deactivated successfully. May 17 00:24:45.155907 kubelet[3161]: I0517 00:24:45.154458 3161 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4rrs9\" (UniqueName: \"kubernetes.io/projected/d0dadeac-75b4-4435-8bc5-fdac9115ed68-kube-api-access-4rrs9\") on node \"ip-172-31-18-208\" DevicePath \"\"" May 17 00:24:45.155907 kubelet[3161]: I0517 00:24:45.154509 3161 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d0dadeac-75b4-4435-8bc5-fdac9115ed68-whisker-backend-key-pair\") on node \"ip-172-31-18-208\" DevicePath \"\"" May 17 00:24:45.155907 kubelet[3161]: I0517 00:24:45.154526 3161 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0dadeac-75b4-4435-8bc5-fdac9115ed68-whisker-ca-bundle\") on node \"ip-172-31-18-208\" DevicePath \"\"" May 17 00:24:45.314952 systemd[1]: Created slice kubepods-besteffort-pod9e29c649_bade_4daa_bb31_67432210eca8.slice - libcontainer container kubepods-besteffort-pod9e29c649_bade_4daa_bb31_67432210eca8.slice. 
May 17 00:24:45.457087 kubelet[3161]: I0517 00:24:45.457045 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4jmq\" (UniqueName: \"kubernetes.io/projected/9e29c649-bade-4daa-bb31-67432210eca8-kube-api-access-v4jmq\") pod \"whisker-d96dfd79b-fl892\" (UID: \"9e29c649-bade-4daa-bb31-67432210eca8\") " pod="calico-system/whisker-d96dfd79b-fl892" May 17 00:24:45.457087 kubelet[3161]: I0517 00:24:45.457091 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9e29c649-bade-4daa-bb31-67432210eca8-whisker-backend-key-pair\") pod \"whisker-d96dfd79b-fl892\" (UID: \"9e29c649-bade-4daa-bb31-67432210eca8\") " pod="calico-system/whisker-d96dfd79b-fl892" May 17 00:24:45.457507 kubelet[3161]: I0517 00:24:45.457115 3161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e29c649-bade-4daa-bb31-67432210eca8-whisker-ca-bundle\") pod \"whisker-d96dfd79b-fl892\" (UID: \"9e29c649-bade-4daa-bb31-67432210eca8\") " pod="calico-system/whisker-d96dfd79b-fl892" May 17 00:24:45.619052 containerd[1976]: time="2025-05-17T00:24:45.618604497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d96dfd79b-fl892,Uid:9e29c649-bade-4daa-bb31-67432210eca8,Namespace:calico-system,Attempt:0,}" May 17 00:24:45.795743 kubelet[3161]: I0517 00:24:45.795686 3161 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0dadeac-75b4-4435-8bc5-fdac9115ed68" path="/var/lib/kubelet/pods/d0dadeac-75b4-4435-8bc5-fdac9115ed68/volumes" May 17 00:24:45.836348 (udev-worker)[4455]: Network interface NamePolicy= disabled on kernel command line. 
May 17 00:24:45.836740 systemd-networkd[1837]: cali059bf3e4366: Link UP May 17 00:24:45.836888 systemd-networkd[1837]: cali059bf3e4366: Gained carrier May 17 00:24:45.865811 containerd[1976]: 2025-05-17 00:24:45.671 [INFO][4548] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:24:45.865811 containerd[1976]: 2025-05-17 00:24:45.683 [INFO][4548] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--208-k8s-whisker--d96dfd79b--fl892-eth0 whisker-d96dfd79b- calico-system 9e29c649-bade-4daa-bb31-67432210eca8 873 0 2025-05-17 00:24:45 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:d96dfd79b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-18-208 whisker-d96dfd79b-fl892 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali059bf3e4366 [] [] }} ContainerID="01287818b793e1544e4261720bf4ed60a6b3afb4f9666f0fbab057a7a8fc05b8" Namespace="calico-system" Pod="whisker-d96dfd79b-fl892" WorkloadEndpoint="ip--172--31--18--208-k8s-whisker--d96dfd79b--fl892-" May 17 00:24:45.865811 containerd[1976]: 2025-05-17 00:24:45.683 [INFO][4548] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="01287818b793e1544e4261720bf4ed60a6b3afb4f9666f0fbab057a7a8fc05b8" Namespace="calico-system" Pod="whisker-d96dfd79b-fl892" WorkloadEndpoint="ip--172--31--18--208-k8s-whisker--d96dfd79b--fl892-eth0" May 17 00:24:45.865811 containerd[1976]: 2025-05-17 00:24:45.724 [INFO][4561] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="01287818b793e1544e4261720bf4ed60a6b3afb4f9666f0fbab057a7a8fc05b8" HandleID="k8s-pod-network.01287818b793e1544e4261720bf4ed60a6b3afb4f9666f0fbab057a7a8fc05b8" Workload="ip--172--31--18--208-k8s-whisker--d96dfd79b--fl892-eth0" May 17 00:24:45.865811 containerd[1976]: 2025-05-17 00:24:45.725 [INFO][4561] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="01287818b793e1544e4261720bf4ed60a6b3afb4f9666f0fbab057a7a8fc05b8" HandleID="k8s-pod-network.01287818b793e1544e4261720bf4ed60a6b3afb4f9666f0fbab057a7a8fc05b8" Workload="ip--172--31--18--208-k8s-whisker--d96dfd79b--fl892-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf9b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-208", "pod":"whisker-d96dfd79b-fl892", "timestamp":"2025-05-17 00:24:45.724317934 +0000 UTC"}, Hostname:"ip-172-31-18-208", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:24:45.865811 containerd[1976]: 2025-05-17 00:24:45.725 [INFO][4561] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:45.865811 containerd[1976]: 2025-05-17 00:24:45.725 [INFO][4561] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:24:45.865811 containerd[1976]: 2025-05-17 00:24:45.725 [INFO][4561] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-208' May 17 00:24:45.865811 containerd[1976]: 2025-05-17 00:24:45.736 [INFO][4561] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.01287818b793e1544e4261720bf4ed60a6b3afb4f9666f0fbab057a7a8fc05b8" host="ip-172-31-18-208" May 17 00:24:45.865811 containerd[1976]: 2025-05-17 00:24:45.759 [INFO][4561] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-208" May 17 00:24:45.865811 containerd[1976]: 2025-05-17 00:24:45.765 [INFO][4561] ipam/ipam.go 511: Trying affinity for 192.168.106.128/26 host="ip-172-31-18-208" May 17 00:24:45.865811 containerd[1976]: 2025-05-17 00:24:45.767 [INFO][4561] ipam/ipam.go 158: Attempting to load block cidr=192.168.106.128/26 host="ip-172-31-18-208" May 17 00:24:45.865811 containerd[1976]: 2025-05-17 00:24:45.770 [INFO][4561] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="ip-172-31-18-208" May 17 00:24:45.865811 containerd[1976]: 2025-05-17 00:24:45.770 [INFO][4561] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.01287818b793e1544e4261720bf4ed60a6b3afb4f9666f0fbab057a7a8fc05b8" host="ip-172-31-18-208" May 17 00:24:45.865811 containerd[1976]: 2025-05-17 00:24:45.773 [INFO][4561] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.01287818b793e1544e4261720bf4ed60a6b3afb4f9666f0fbab057a7a8fc05b8 May 17 00:24:45.865811 containerd[1976]: 2025-05-17 00:24:45.788 [INFO][4561] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.01287818b793e1544e4261720bf4ed60a6b3afb4f9666f0fbab057a7a8fc05b8" host="ip-172-31-18-208" May 17 00:24:45.865811 containerd[1976]: 2025-05-17 00:24:45.811 [INFO][4561] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.106.129/26] block=192.168.106.128/26 handle="k8s-pod-network.01287818b793e1544e4261720bf4ed60a6b3afb4f9666f0fbab057a7a8fc05b8" host="ip-172-31-18-208" May 17 00:24:45.865811 containerd[1976]: 2025-05-17 00:24:45.812 [INFO][4561] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.106.129/26] handle="k8s-pod-network.01287818b793e1544e4261720bf4ed60a6b3afb4f9666f0fbab057a7a8fc05b8" host="ip-172-31-18-208" May 17 00:24:45.865811 containerd[1976]: 2025-05-17 00:24:45.812 [INFO][4561] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:24:45.865811 containerd[1976]: 2025-05-17 00:24:45.812 [INFO][4561] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.129/26] IPv6=[] ContainerID="01287818b793e1544e4261720bf4ed60a6b3afb4f9666f0fbab057a7a8fc05b8" HandleID="k8s-pod-network.01287818b793e1544e4261720bf4ed60a6b3afb4f9666f0fbab057a7a8fc05b8" Workload="ip--172--31--18--208-k8s-whisker--d96dfd79b--fl892-eth0" May 17 00:24:45.867036 containerd[1976]: 2025-05-17 00:24:45.819 [INFO][4548] cni-plugin/k8s.go 418: Populated endpoint ContainerID="01287818b793e1544e4261720bf4ed60a6b3afb4f9666f0fbab057a7a8fc05b8" Namespace="calico-system" Pod="whisker-d96dfd79b-fl892" WorkloadEndpoint="ip--172--31--18--208-k8s-whisker--d96dfd79b--fl892-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--208-k8s-whisker--d96dfd79b--fl892-eth0", GenerateName:"whisker-d96dfd79b-", Namespace:"calico-system", SelfLink:"", UID:"9e29c649-bade-4daa-bb31-67432210eca8", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 24, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"d96dfd79b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-208", ContainerID:"", Pod:"whisker-d96dfd79b-fl892", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.106.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali059bf3e4366", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:45.867036 containerd[1976]: 2025-05-17 00:24:45.819 [INFO][4548] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.106.129/32] ContainerID="01287818b793e1544e4261720bf4ed60a6b3afb4f9666f0fbab057a7a8fc05b8" Namespace="calico-system" Pod="whisker-d96dfd79b-fl892" WorkloadEndpoint="ip--172--31--18--208-k8s-whisker--d96dfd79b--fl892-eth0" May 17 00:24:45.867036 containerd[1976]: 2025-05-17 00:24:45.819 [INFO][4548] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali059bf3e4366 ContainerID="01287818b793e1544e4261720bf4ed60a6b3afb4f9666f0fbab057a7a8fc05b8" Namespace="calico-system" Pod="whisker-d96dfd79b-fl892" WorkloadEndpoint="ip--172--31--18--208-k8s-whisker--d96dfd79b--fl892-eth0" May 17 00:24:45.867036 containerd[1976]: 2025-05-17 00:24:45.835 [INFO][4548] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="01287818b793e1544e4261720bf4ed60a6b3afb4f9666f0fbab057a7a8fc05b8" Namespace="calico-system" Pod="whisker-d96dfd79b-fl892" WorkloadEndpoint="ip--172--31--18--208-k8s-whisker--d96dfd79b--fl892-eth0" May 17 00:24:45.867036 containerd[1976]: 2025-05-17 00:24:45.837 [INFO][4548] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="01287818b793e1544e4261720bf4ed60a6b3afb4f9666f0fbab057a7a8fc05b8" Namespace="calico-system" Pod="whisker-d96dfd79b-fl892" 
WorkloadEndpoint="ip--172--31--18--208-k8s-whisker--d96dfd79b--fl892-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--208-k8s-whisker--d96dfd79b--fl892-eth0", GenerateName:"whisker-d96dfd79b-", Namespace:"calico-system", SelfLink:"", UID:"9e29c649-bade-4daa-bb31-67432210eca8", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 24, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"d96dfd79b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-208", ContainerID:"01287818b793e1544e4261720bf4ed60a6b3afb4f9666f0fbab057a7a8fc05b8", Pod:"whisker-d96dfd79b-fl892", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.106.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali059bf3e4366", MAC:"52:81:34:e2:c0:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:45.867036 containerd[1976]: 2025-05-17 00:24:45.862 [INFO][4548] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="01287818b793e1544e4261720bf4ed60a6b3afb4f9666f0fbab057a7a8fc05b8" Namespace="calico-system" Pod="whisker-d96dfd79b-fl892" WorkloadEndpoint="ip--172--31--18--208-k8s-whisker--d96dfd79b--fl892-eth0" May 17 00:24:45.974226 containerd[1976]: time="2025-05-17T00:24:45.965522659Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:24:45.974226 containerd[1976]: time="2025-05-17T00:24:45.969314198Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:24:45.974226 containerd[1976]: time="2025-05-17T00:24:45.970077120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:45.974226 containerd[1976]: time="2025-05-17T00:24:45.971628001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:46.048705 systemd[1]: Started cri-containerd-01287818b793e1544e4261720bf4ed60a6b3afb4f9666f0fbab057a7a8fc05b8.scope - libcontainer container 01287818b793e1544e4261720bf4ed60a6b3afb4f9666f0fbab057a7a8fc05b8. 
May 17 00:24:46.162460 containerd[1976]: time="2025-05-17T00:24:46.162416422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d96dfd79b-fl892,Uid:9e29c649-bade-4daa-bb31-67432210eca8,Namespace:calico-system,Attempt:0,} returns sandbox id \"01287818b793e1544e4261720bf4ed60a6b3afb4f9666f0fbab057a7a8fc05b8\"" May 17 00:24:46.173416 containerd[1976]: time="2025-05-17T00:24:46.173384611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:24:46.407449 containerd[1976]: time="2025-05-17T00:24:46.407375954Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:24:46.409634 containerd[1976]: time="2025-05-17T00:24:46.409523503Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:24:46.409634 containerd[1976]: time="2025-05-17T00:24:46.409583936Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:24:46.409856 kubelet[3161]: E0517 00:24:46.409784 3161 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:24:46.411310 kubelet[3161]: E0517 00:24:46.411265 3161 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:24:46.420449 kubelet[3161]: E0517 00:24:46.420371 3161 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b29552b59a2b4980bc180c562b9beff2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v4jmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-d96dfd79b-fl892_calico-system(9e29c649-bade-4daa-bb31-67432210eca8): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:24:46.422489 containerd[1976]: time="2025-05-17T00:24:46.422456223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:24:46.589517 containerd[1976]: time="2025-05-17T00:24:46.589437706Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:24:46.591441 containerd[1976]: time="2025-05-17T00:24:46.591363183Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:24:46.591589 containerd[1976]: time="2025-05-17T00:24:46.591453958Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:24:46.591669 kubelet[3161]: E0517 00:24:46.591626 3161 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:24:46.592206 kubelet[3161]: E0517 00:24:46.591673 3161 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:24:46.592315 kubelet[3161]: E0517 00:24:46.591785 3161 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v4jmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-d96dfd79b-fl892_calico-system(9e29c649-bade-4daa-bb31-67432210eca8): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:24:46.594056 kubelet[3161]: E0517 00:24:46.594018 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-d96dfd79b-fl892" podUID="9e29c649-bade-4daa-bb31-67432210eca8" May 17 00:24:46.776168 containerd[1976]: time="2025-05-17T00:24:46.776138094Z" level=info msg="StopPodSandbox for \"5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8\"" May 17 00:24:46.861525 containerd[1976]: 2025-05-17 00:24:46.817 [INFO][4741] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8" May 17 00:24:46.861525 containerd[1976]: 2025-05-17 00:24:46.818 [INFO][4741] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8" iface="eth0" netns="/var/run/netns/cni-530017a5-c6c6-9bd3-8086-df669b3bc95b" May 17 00:24:46.861525 containerd[1976]: 2025-05-17 00:24:46.822 [INFO][4741] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8" iface="eth0" netns="/var/run/netns/cni-530017a5-c6c6-9bd3-8086-df669b3bc95b" May 17 00:24:46.861525 containerd[1976]: 2025-05-17 00:24:46.823 [INFO][4741] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8" iface="eth0" netns="/var/run/netns/cni-530017a5-c6c6-9bd3-8086-df669b3bc95b" May 17 00:24:46.861525 containerd[1976]: 2025-05-17 00:24:46.823 [INFO][4741] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8" May 17 00:24:46.861525 containerd[1976]: 2025-05-17 00:24:46.823 [INFO][4741] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8" May 17 00:24:46.861525 containerd[1976]: 2025-05-17 00:24:46.849 [INFO][4749] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8" HandleID="k8s-pod-network.5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8" Workload="ip--172--31--18--208-k8s-goldmane--78d55f7ddc--w4ggj-eth0" May 17 00:24:46.861525 containerd[1976]: 2025-05-17 00:24:46.850 [INFO][4749] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:46.861525 containerd[1976]: 2025-05-17 00:24:46.850 [INFO][4749] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:46.861525 containerd[1976]: 2025-05-17 00:24:46.855 [WARNING][4749] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8" HandleID="k8s-pod-network.5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8" Workload="ip--172--31--18--208-k8s-goldmane--78d55f7ddc--w4ggj-eth0" May 17 00:24:46.861525 containerd[1976]: 2025-05-17 00:24:46.855 [INFO][4749] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8" HandleID="k8s-pod-network.5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8" Workload="ip--172--31--18--208-k8s-goldmane--78d55f7ddc--w4ggj-eth0" May 17 00:24:46.861525 containerd[1976]: 2025-05-17 00:24:46.857 [INFO][4749] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:46.861525 containerd[1976]: 2025-05-17 00:24:46.859 [INFO][4741] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8" May 17 00:24:46.863748 containerd[1976]: time="2025-05-17T00:24:46.863681179Z" level=info msg="TearDown network for sandbox \"5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8\" successfully" May 17 00:24:46.863748 containerd[1976]: time="2025-05-17T00:24:46.863720706Z" level=info msg="StopPodSandbox for \"5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8\" returns successfully" May 17 00:24:46.864201 systemd[1]: run-netns-cni\x2d530017a5\x2dc6c6\x2d9bd3\x2d8086\x2ddf669b3bc95b.mount: Deactivated successfully. May 17 00:24:46.864795 containerd[1976]: time="2025-05-17T00:24:46.864772174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-w4ggj,Uid:3edbec67-a280-4b9a-b567-9942c66f18d0,Namespace:calico-system,Attempt:1,}" May 17 00:24:46.989666 systemd-networkd[1837]: cali6ea7b627f8b: Link UP May 17 00:24:46.991632 systemd-networkd[1837]: cali6ea7b627f8b: Gained carrier May 17 00:24:47.010410 containerd[1976]: 2025-05-17 00:24:46.914 [INFO][4756] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:24:47.010410 containerd[1976]: 2025-05-17 00:24:46.925 [INFO][4756] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--208-k8s-goldmane--78d55f7ddc--w4ggj-eth0 goldmane-78d55f7ddc- calico-system 3edbec67-a280-4b9a-b567-9942c66f18d0 887 0 2025-05-17 00:24:24 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:78d55f7ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-18-208 goldmane-78d55f7ddc-w4ggj eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6ea7b627f8b [] [] }} ContainerID="7fc7dc3b4c45ffeacceb437ff92aaa78aa4df22468795436c140b601afb0db4b" Namespace="calico-system" Pod="goldmane-78d55f7ddc-w4ggj" WorkloadEndpoint="ip--172--31--18--208-k8s-goldmane--78d55f7ddc--w4ggj-" May 17 00:24:47.010410 containerd[1976]: 2025-05-17 00:24:46.925 [INFO][4756] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7fc7dc3b4c45ffeacceb437ff92aaa78aa4df22468795436c140b601afb0db4b" Namespace="calico-system" Pod="goldmane-78d55f7ddc-w4ggj" WorkloadEndpoint="ip--172--31--18--208-k8s-goldmane--78d55f7ddc--w4ggj-eth0" May 17 00:24:47.010410 containerd[1976]: 2025-05-17 00:24:46.949 [INFO][4768] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7fc7dc3b4c45ffeacceb437ff92aaa78aa4df22468795436c140b601afb0db4b" 
HandleID="k8s-pod-network.7fc7dc3b4c45ffeacceb437ff92aaa78aa4df22468795436c140b601afb0db4b" Workload="ip--172--31--18--208-k8s-goldmane--78d55f7ddc--w4ggj-eth0" May 17 00:24:47.010410 containerd[1976]: 2025-05-17 00:24:46.949 [INFO][4768] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7fc7dc3b4c45ffeacceb437ff92aaa78aa4df22468795436c140b601afb0db4b" HandleID="k8s-pod-network.7fc7dc3b4c45ffeacceb437ff92aaa78aa4df22468795436c140b601afb0db4b" Workload="ip--172--31--18--208-k8s-goldmane--78d55f7ddc--w4ggj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9020), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-208", "pod":"goldmane-78d55f7ddc-w4ggj", "timestamp":"2025-05-17 00:24:46.949582724 +0000 UTC"}, Hostname:"ip-172-31-18-208", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:24:47.010410 containerd[1976]: 2025-05-17 00:24:46.949 [INFO][4768] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:47.010410 containerd[1976]: 2025-05-17 00:24:46.949 [INFO][4768] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:47.010410 containerd[1976]: 2025-05-17 00:24:46.949 [INFO][4768] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-208' May 17 00:24:47.010410 containerd[1976]: 2025-05-17 00:24:46.956 [INFO][4768] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7fc7dc3b4c45ffeacceb437ff92aaa78aa4df22468795436c140b601afb0db4b" host="ip-172-31-18-208" May 17 00:24:47.010410 containerd[1976]: 2025-05-17 00:24:46.961 [INFO][4768] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-208" May 17 00:24:47.010410 containerd[1976]: 2025-05-17 00:24:46.968 [INFO][4768] ipam/ipam.go 511: Trying affinity for 192.168.106.128/26 host="ip-172-31-18-208" May 17 00:24:47.010410 containerd[1976]: 2025-05-17 00:24:46.970 [INFO][4768] ipam/ipam.go 158: Attempting to load block cidr=192.168.106.128/26 host="ip-172-31-18-208" May 17 00:24:47.010410 containerd[1976]: 2025-05-17 00:24:46.972 [INFO][4768] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="ip-172-31-18-208" May 17 00:24:47.010410 containerd[1976]: 2025-05-17 00:24:46.972 [INFO][4768] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.7fc7dc3b4c45ffeacceb437ff92aaa78aa4df22468795436c140b601afb0db4b" host="ip-172-31-18-208" May 17 00:24:47.010410 containerd[1976]: 2025-05-17 00:24:46.973 [INFO][4768] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7fc7dc3b4c45ffeacceb437ff92aaa78aa4df22468795436c140b601afb0db4b May 17 00:24:47.010410 containerd[1976]: 2025-05-17 00:24:46.979 [INFO][4768] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.7fc7dc3b4c45ffeacceb437ff92aaa78aa4df22468795436c140b601afb0db4b" host="ip-172-31-18-208" May 17 00:24:47.010410 containerd[1976]: 2025-05-17 00:24:46.984 [INFO][4768] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.106.130/26] block=192.168.106.128/26 handle="k8s-pod-network.7fc7dc3b4c45ffeacceb437ff92aaa78aa4df22468795436c140b601afb0db4b" host="ip-172-31-18-208" May 17 00:24:47.010410 containerd[1976]: 2025-05-17 00:24:46.984 [INFO][4768] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.106.130/26] 
handle="k8s-pod-network.7fc7dc3b4c45ffeacceb437ff92aaa78aa4df22468795436c140b601afb0db4b" host="ip-172-31-18-208" May 17 00:24:47.010410 containerd[1976]: 2025-05-17 00:24:46.984 [INFO][4768] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:47.010410 containerd[1976]: 2025-05-17 00:24:46.985 [INFO][4768] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.130/26] IPv6=[] ContainerID="7fc7dc3b4c45ffeacceb437ff92aaa78aa4df22468795436c140b601afb0db4b" HandleID="k8s-pod-network.7fc7dc3b4c45ffeacceb437ff92aaa78aa4df22468795436c140b601afb0db4b" Workload="ip--172--31--18--208-k8s-goldmane--78d55f7ddc--w4ggj-eth0" May 17 00:24:47.011309 containerd[1976]: 2025-05-17 00:24:46.987 [INFO][4756] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7fc7dc3b4c45ffeacceb437ff92aaa78aa4df22468795436c140b601afb0db4b" Namespace="calico-system" Pod="goldmane-78d55f7ddc-w4ggj" WorkloadEndpoint="ip--172--31--18--208-k8s-goldmane--78d55f7ddc--w4ggj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--208-k8s-goldmane--78d55f7ddc--w4ggj-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"3edbec67-a280-4b9a-b567-9942c66f18d0", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 24, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-208", ContainerID:"", Pod:"goldmane-78d55f7ddc-w4ggj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.106.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6ea7b627f8b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:47.011309 containerd[1976]: 2025-05-17 00:24:46.987 [INFO][4756] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.106.130/32] ContainerID="7fc7dc3b4c45ffeacceb437ff92aaa78aa4df22468795436c140b601afb0db4b" Namespace="calico-system" Pod="goldmane-78d55f7ddc-w4ggj" WorkloadEndpoint="ip--172--31--18--208-k8s-goldmane--78d55f7ddc--w4ggj-eth0" May 17 00:24:47.011309 containerd[1976]: 2025-05-17 00:24:46.987 [INFO][4756] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6ea7b627f8b ContainerID="7fc7dc3b4c45ffeacceb437ff92aaa78aa4df22468795436c140b601afb0db4b" Namespace="calico-system" Pod="goldmane-78d55f7ddc-w4ggj" WorkloadEndpoint="ip--172--31--18--208-k8s-goldmane--78d55f7ddc--w4ggj-eth0" May 17 00:24:47.011309 containerd[1976]: 2025-05-17 00:24:46.992 [INFO][4756] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7fc7dc3b4c45ffeacceb437ff92aaa78aa4df22468795436c140b601afb0db4b" Namespace="calico-system" Pod="goldmane-78d55f7ddc-w4ggj" WorkloadEndpoint="ip--172--31--18--208-k8s-goldmane--78d55f7ddc--w4ggj-eth0" May 17 00:24:47.011309 containerd[1976]: 2025-05-17 
00:24:46.993 [INFO][4756] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7fc7dc3b4c45ffeacceb437ff92aaa78aa4df22468795436c140b601afb0db4b" Namespace="calico-system" Pod="goldmane-78d55f7ddc-w4ggj" WorkloadEndpoint="ip--172--31--18--208-k8s-goldmane--78d55f7ddc--w4ggj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--208-k8s-goldmane--78d55f7ddc--w4ggj-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"3edbec67-a280-4b9a-b567-9942c66f18d0", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 24, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-208", ContainerID:"7fc7dc3b4c45ffeacceb437ff92aaa78aa4df22468795436c140b601afb0db4b", Pod:"goldmane-78d55f7ddc-w4ggj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.106.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6ea7b627f8b", MAC:"42:8e:7e:b1:85:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:47.011309 containerd[1976]: 2025-05-17 00:24:47.007 [INFO][4756] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7fc7dc3b4c45ffeacceb437ff92aaa78aa4df22468795436c140b601afb0db4b" Namespace="calico-system" Pod="goldmane-78d55f7ddc-w4ggj" WorkloadEndpoint="ip--172--31--18--208-k8s-goldmane--78d55f7ddc--w4ggj-eth0" May 17 00:24:47.030313 containerd[1976]: time="2025-05-17T00:24:47.029778960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:24:47.030313 containerd[1976]: time="2025-05-17T00:24:47.029846120Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:24:47.030313 containerd[1976]: time="2025-05-17T00:24:47.029860910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:47.030313 containerd[1976]: time="2025-05-17T00:24:47.029947207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:47.055736 systemd[1]: Started cri-containerd-7fc7dc3b4c45ffeacceb437ff92aaa78aa4df22468795436c140b601afb0db4b.scope - libcontainer container 7fc7dc3b4c45ffeacceb437ff92aaa78aa4df22468795436c140b601afb0db4b. 
May 17 00:24:47.107525 kubelet[3161]: E0517 00:24:47.107452 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-d96dfd79b-fl892" podUID="9e29c649-bade-4daa-bb31-67432210eca8" May 17 00:24:47.109507 containerd[1976]: time="2025-05-17T00:24:47.109230864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-w4ggj,Uid:3edbec67-a280-4b9a-b567-9942c66f18d0,Namespace:calico-system,Attempt:1,} returns sandbox id \"7fc7dc3b4c45ffeacceb437ff92aaa78aa4df22468795436c140b601afb0db4b\"" May 17 00:24:47.110509 containerd[1976]: time="2025-05-17T00:24:47.110487115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:24:47.347279 containerd[1976]: time="2025-05-17T00:24:47.346844140Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:24:47.349032 containerd[1976]: time="2025-05-17T00:24:47.348989416Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:24:47.349193 containerd[1976]: time="2025-05-17T00:24:47.348996388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:24:47.349606 kubelet[3161]: E0517 00:24:47.349355 3161 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:24:47.349606 kubelet[3161]: E0517 00:24:47.349407 3161 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to 
resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:24:47.349748 kubelet[3161]: E0517 00:24:47.349696 3161 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mcgrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-w4ggj_calico-system(3edbec67-a280-4b9a-b567-9942c66f18d0): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:24:47.350924 
kubelet[3161]: E0517 00:24:47.350855 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-w4ggj" podUID="3edbec67-a280-4b9a-b567-9942c66f18d0" May 17 00:24:47.728854 systemd-networkd[1837]: cali059bf3e4366: Gained IPv6LL May 17 00:24:47.822135 containerd[1976]: time="2025-05-17T00:24:47.821840809Z" level=info msg="StopPodSandbox for \"d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1\"" May 17 00:24:47.822419 containerd[1976]: time="2025-05-17T00:24:47.822334026Z" level=info msg="StopPodSandbox for \"e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b\"" May 17 00:24:47.824468 containerd[1976]: time="2025-05-17T00:24:47.824075335Z" level=info msg="StopPodSandbox for \"9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56\"" May 17 00:24:47.953345 containerd[1976]: 2025-05-17 00:24:47.895 [INFO][4874] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56" May 17 00:24:47.953345 containerd[1976]: 2025-05-17 00:24:47.895 [INFO][4874] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56" iface="eth0" netns="/var/run/netns/cni-15a2caf2-ac5e-1acb-b168-602ee5d4f760" May 17 00:24:47.953345 containerd[1976]: 2025-05-17 00:24:47.896 [INFO][4874] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56" iface="eth0" netns="/var/run/netns/cni-15a2caf2-ac5e-1acb-b168-602ee5d4f760" May 17 00:24:47.953345 containerd[1976]: 2025-05-17 00:24:47.899 [INFO][4874] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56" iface="eth0" netns="/var/run/netns/cni-15a2caf2-ac5e-1acb-b168-602ee5d4f760" May 17 00:24:47.953345 containerd[1976]: 2025-05-17 00:24:47.899 [INFO][4874] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56" May 17 00:24:47.953345 containerd[1976]: 2025-05-17 00:24:47.899 [INFO][4874] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56" May 17 00:24:47.953345 containerd[1976]: 2025-05-17 00:24:47.933 [INFO][4899] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56" HandleID="k8s-pod-network.9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56" Workload="ip--172--31--18--208-k8s-calico--kube--controllers--58f54d8566--6bhlt-eth0" May 17 00:24:47.953345 containerd[1976]: 2025-05-17 00:24:47.933 [INFO][4899] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:47.953345 containerd[1976]: 2025-05-17 00:24:47.933 [INFO][4899] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:24:47.953345 containerd[1976]: 2025-05-17 00:24:47.945 [WARNING][4899] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56" HandleID="k8s-pod-network.9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56" Workload="ip--172--31--18--208-k8s-calico--kube--controllers--58f54d8566--6bhlt-eth0" May 17 00:24:47.953345 containerd[1976]: 2025-05-17 00:24:47.945 [INFO][4899] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56" HandleID="k8s-pod-network.9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56" Workload="ip--172--31--18--208-k8s-calico--kube--controllers--58f54d8566--6bhlt-eth0" May 17 00:24:47.953345 containerd[1976]: 2025-05-17 00:24:47.947 [INFO][4899] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:47.953345 containerd[1976]: 2025-05-17 00:24:47.951 [INFO][4874] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56" May 17 00:24:47.954471 containerd[1976]: time="2025-05-17T00:24:47.953975814Z" level=info msg="TearDown network for sandbox \"9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56\" successfully" May 17 00:24:47.954471 containerd[1976]: time="2025-05-17T00:24:47.954012460Z" level=info msg="StopPodSandbox for \"9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56\" returns successfully" May 17 00:24:47.955088 containerd[1976]: time="2025-05-17T00:24:47.954908096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58f54d8566-6bhlt,Uid:4c7db054-059f-46a4-9fc7-ca1358ceaf57,Namespace:calico-system,Attempt:1,}" May 17 00:24:47.958827 systemd[1]: run-netns-cni\x2d15a2caf2\x2dac5e\x2d1acb\x2db168\x2d602ee5d4f760.mount: Deactivated successfully. May 17 00:24:47.968278 containerd[1976]: 2025-05-17 00:24:47.892 [INFO][4867] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1" May 17 00:24:47.968278 containerd[1976]: 2025-05-17 00:24:47.893 [INFO][4867] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1" iface="eth0" netns="/var/run/netns/cni-82c33253-e181-dec4-8ac0-f752b60f1037" May 17 00:24:47.968278 containerd[1976]: 2025-05-17 00:24:47.894 [INFO][4867] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1" iface="eth0" netns="/var/run/netns/cni-82c33253-e181-dec4-8ac0-f752b60f1037" May 17 00:24:47.968278 containerd[1976]: 2025-05-17 00:24:47.894 [INFO][4867] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1" iface="eth0" netns="/var/run/netns/cni-82c33253-e181-dec4-8ac0-f752b60f1037" May 17 00:24:47.968278 containerd[1976]: 2025-05-17 00:24:47.894 [INFO][4867] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1" May 17 00:24:47.968278 containerd[1976]: 2025-05-17 00:24:47.894 [INFO][4867] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1" May 17 00:24:47.968278 containerd[1976]: 2025-05-17 00:24:47.947 [INFO][4893] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1" HandleID="k8s-pod-network.d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1" Workload="ip--172--31--18--208-k8s-csi--node--driver--7knxl-eth0" May 17 00:24:47.968278 containerd[1976]: 2025-05-17 00:24:47.948 [INFO][4893] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:47.968278 containerd[1976]: 2025-05-17 00:24:47.948 [INFO][4893] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:47.968278 containerd[1976]: 2025-05-17 00:24:47.959 [WARNING][4893] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1" HandleID="k8s-pod-network.d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1" Workload="ip--172--31--18--208-k8s-csi--node--driver--7knxl-eth0" May 17 00:24:47.968278 containerd[1976]: 2025-05-17 00:24:47.960 [INFO][4893] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1" HandleID="k8s-pod-network.d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1" Workload="ip--172--31--18--208-k8s-csi--node--driver--7knxl-eth0" May 17 00:24:47.968278 containerd[1976]: 2025-05-17 00:24:47.962 [INFO][4893] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:47.968278 containerd[1976]: 2025-05-17 00:24:47.965 [INFO][4867] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1" May 17 00:24:47.969969 containerd[1976]: time="2025-05-17T00:24:47.968600958Z" level=info msg="TearDown network for sandbox \"d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1\" successfully" May 17 00:24:47.969969 containerd[1976]: time="2025-05-17T00:24:47.968624978Z" level=info msg="StopPodSandbox for \"d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1\" returns successfully" May 17 00:24:47.970849 containerd[1976]: time="2025-05-17T00:24:47.970708513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7knxl,Uid:154c5300-472e-444e-8595-31315d3f4aee,Namespace:calico-system,Attempt:1,}" May 17 00:24:47.972364 systemd[1]: run-netns-cni\x2d82c33253\x2de181\x2ddec4\x2d8ac0\x2df752b60f1037.mount: Deactivated successfully. May 17 00:24:48.020709 containerd[1976]: 2025-05-17 00:24:47.912 [INFO][4876] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b" May 17 00:24:48.020709 containerd[1976]: 2025-05-17 00:24:47.913 [INFO][4876] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b" iface="eth0" netns="/var/run/netns/cni-9abd8934-3c73-58ce-1785-0517be25f3a8" May 17 00:24:48.020709 containerd[1976]: 2025-05-17 00:24:47.913 [INFO][4876] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b" iface="eth0" netns="/var/run/netns/cni-9abd8934-3c73-58ce-1785-0517be25f3a8" May 17 00:24:48.020709 containerd[1976]: 2025-05-17 00:24:47.913 [INFO][4876] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b" iface="eth0" netns="/var/run/netns/cni-9abd8934-3c73-58ce-1785-0517be25f3a8" May 17 00:24:48.020709 containerd[1976]: 2025-05-17 00:24:47.913 [INFO][4876] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b" May 17 00:24:48.020709 containerd[1976]: 2025-05-17 00:24:47.913 [INFO][4876] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b" May 17 00:24:48.020709 containerd[1976]: 2025-05-17 00:24:47.951 [INFO][4904] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b" HandleID="k8s-pod-network.e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b" Workload="ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--rkbxs-eth0" May 17 00:24:48.020709 containerd[1976]: 2025-05-17 00:24:47.951 [INFO][4904] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:48.020709 containerd[1976]: 2025-05-17 00:24:47.962 [INFO][4904] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:48.020709 containerd[1976]: 2025-05-17 00:24:47.979 [WARNING][4904] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b" HandleID="k8s-pod-network.e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b" Workload="ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--rkbxs-eth0" May 17 00:24:48.020709 containerd[1976]: 2025-05-17 00:24:47.979 [INFO][4904] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b" HandleID="k8s-pod-network.e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b" Workload="ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--rkbxs-eth0" May 17 00:24:48.020709 containerd[1976]: 2025-05-17 00:24:47.982 [INFO][4904] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:48.020709 containerd[1976]: 2025-05-17 00:24:47.989 [INFO][4876] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b" May 17 00:24:48.023862 containerd[1976]: time="2025-05-17T00:24:48.021164442Z" level=info msg="TearDown network for sandbox \"e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b\" successfully" May 17 00:24:48.023862 containerd[1976]: time="2025-05-17T00:24:48.021198900Z" level=info msg="StopPodSandbox for \"e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b\" returns successfully" May 17 00:24:48.023862 containerd[1976]: time="2025-05-17T00:24:48.022922660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8649d85dd-rkbxs,Uid:ddf692d9-2f7b-48c5-85a5-b8c1de84fd75,Namespace:calico-apiserver,Attempt:1,}" May 17 00:24:48.114563 kubelet[3161]: E0517 00:24:48.114282 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-w4ggj" podUID="3edbec67-a280-4b9a-b567-9942c66f18d0" May 17 00:24:48.117091 kubelet[3161]: E0517 00:24:48.116989 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-d96dfd79b-fl892" podUID="9e29c649-bade-4daa-bb31-67432210eca8" May 17 00:24:48.395422 systemd-networkd[1837]: calid3525e8a027: Link UP May 17 00:24:48.395772 systemd-networkd[1837]: calid3525e8a027: Gained carrier May 17 00:24:48.427425 containerd[1976]: 2025-05-17 00:24:48.059 [INFO][4914] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:24:48.427425 containerd[1976]: 2025-05-17 00:24:48.091 [INFO][4914] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--208-k8s-calico--kube--controllers--58f54d8566--6bhlt-eth0 calico-kube-controllers-58f54d8566- calico-system 4c7db054-059f-46a4-9fc7-ca1358ceaf57 907 0 2025-05-17 00:24:24 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:58f54d8566 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-18-208 calico-kube-controllers-58f54d8566-6bhlt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid3525e8a027 [] [] }} ContainerID="fb324f8285e3c4975f9489bbc2366c497b19bde2d61915107c350c82101cfe32" Namespace="calico-system" Pod="calico-kube-controllers-58f54d8566-6bhlt" WorkloadEndpoint="ip--172--31--18--208-k8s-calico--kube--controllers--58f54d8566--6bhlt-" May 17 00:24:48.427425 containerd[1976]: 2025-05-17 00:24:48.091 [INFO][4914] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fb324f8285e3c4975f9489bbc2366c497b19bde2d61915107c350c82101cfe32" Namespace="calico-system" Pod="calico-kube-controllers-58f54d8566-6bhlt" WorkloadEndpoint="ip--172--31--18--208-k8s-calico--kube--controllers--58f54d8566--6bhlt-eth0" May 17 00:24:48.427425 containerd[1976]: 2025-05-17 00:24:48.252 [INFO][4952] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb324f8285e3c4975f9489bbc2366c497b19bde2d61915107c350c82101cfe32" HandleID="k8s-pod-network.fb324f8285e3c4975f9489bbc2366c497b19bde2d61915107c350c82101cfe32" Workload="ip--172--31--18--208-k8s-calico--kube--controllers--58f54d8566--6bhlt-eth0" May 17 00:24:48.427425 containerd[1976]: 2025-05-17 00:24:48.257 [INFO][4952] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fb324f8285e3c4975f9489bbc2366c497b19bde2d61915107c350c82101cfe32" HandleID="k8s-pod-network.fb324f8285e3c4975f9489bbc2366c497b19bde2d61915107c350c82101cfe32" Workload="ip--172--31--18--208-k8s-calico--kube--controllers--58f54d8566--6bhlt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9db0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-208", "pod":"calico-kube-controllers-58f54d8566-6bhlt", "timestamp":"2025-05-17 00:24:48.252310885 +0000 UTC"}, Hostname:"ip-172-31-18-208", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:24:48.427425 containerd[1976]: 2025-05-17 00:24:48.257 [INFO][4952] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:48.427425 containerd[1976]: 2025-05-17 00:24:48.257 [INFO][4952] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:24:48.427425 containerd[1976]: 2025-05-17 00:24:48.257 [INFO][4952] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-208' May 17 00:24:48.427425 containerd[1976]: 2025-05-17 00:24:48.312 [INFO][4952] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fb324f8285e3c4975f9489bbc2366c497b19bde2d61915107c350c82101cfe32" host="ip-172-31-18-208" May 17 00:24:48.427425 containerd[1976]: 2025-05-17 00:24:48.331 [INFO][4952] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-208" May 17 00:24:48.427425 containerd[1976]: 2025-05-17 00:24:48.350 [INFO][4952] ipam/ipam.go 511: Trying affinity for 192.168.106.128/26 host="ip-172-31-18-208" May 17 00:24:48.427425 containerd[1976]: 2025-05-17 00:24:48.353 [INFO][4952] ipam/ipam.go 158: Attempting to load block cidr=192.168.106.128/26 host="ip-172-31-18-208" May 17 00:24:48.427425 containerd[1976]: 2025-05-17 00:24:48.358 [INFO][4952] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="ip-172-31-18-208" May 17 00:24:48.427425 containerd[1976]: 2025-05-17 00:24:48.358 [INFO][4952] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.fb324f8285e3c4975f9489bbc2366c497b19bde2d61915107c350c82101cfe32" host="ip-172-31-18-208" May 17 00:24:48.427425 containerd[1976]: 2025-05-17 00:24:48.360 [INFO][4952] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fb324f8285e3c4975f9489bbc2366c497b19bde2d61915107c350c82101cfe32 May 17 00:24:48.427425 containerd[1976]: 2025-05-17 00:24:48.368 [INFO][4952] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.fb324f8285e3c4975f9489bbc2366c497b19bde2d61915107c350c82101cfe32" host="ip-172-31-18-208" May 17 00:24:48.427425 containerd[1976]: 2025-05-17 00:24:48.378 [INFO][4952] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.106.131/26] block=192.168.106.128/26 handle="k8s-pod-network.fb324f8285e3c4975f9489bbc2366c497b19bde2d61915107c350c82101cfe32" host="ip-172-31-18-208" May 17 00:24:48.427425 containerd[1976]: 2025-05-17 00:24:48.378 [INFO][4952] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.106.131/26] handle="k8s-pod-network.fb324f8285e3c4975f9489bbc2366c497b19bde2d61915107c350c82101cfe32" host="ip-172-31-18-208" May 17 00:24:48.427425 containerd[1976]: 2025-05-17 00:24:48.378 [INFO][4952] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:24:48.427425 containerd[1976]: 2025-05-17 00:24:48.378 [INFO][4952] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.131/26] IPv6=[] ContainerID="fb324f8285e3c4975f9489bbc2366c497b19bde2d61915107c350c82101cfe32" HandleID="k8s-pod-network.fb324f8285e3c4975f9489bbc2366c497b19bde2d61915107c350c82101cfe32" Workload="ip--172--31--18--208-k8s-calico--kube--controllers--58f54d8566--6bhlt-eth0"
May 17 00:24:48.428703 containerd[1976]: 2025-05-17 00:24:48.387 [INFO][4914] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fb324f8285e3c4975f9489bbc2366c497b19bde2d61915107c350c82101cfe32" Namespace="calico-system" Pod="calico-kube-controllers-58f54d8566-6bhlt" WorkloadEndpoint="ip--172--31--18--208-k8s-calico--kube--controllers--58f54d8566--6bhlt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--208-k8s-calico--kube--controllers--58f54d8566--6bhlt-eth0", GenerateName:"calico-kube-controllers-58f54d8566-", Namespace:"calico-system", SelfLink:"", UID:"4c7db054-059f-46a4-9fc7-ca1358ceaf57", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 24, 24, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58f54d8566", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-208", ContainerID:"", Pod:"calico-kube-controllers-58f54d8566-6bhlt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid3525e8a027", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
May 17 00:24:48.428703 containerd[1976]: 2025-05-17 00:24:48.388 [INFO][4914] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.106.131/32] ContainerID="fb324f8285e3c4975f9489bbc2366c497b19bde2d61915107c350c82101cfe32" Namespace="calico-system" Pod="calico-kube-controllers-58f54d8566-6bhlt" WorkloadEndpoint="ip--172--31--18--208-k8s-calico--kube--controllers--58f54d8566--6bhlt-eth0"
May 17 00:24:48.428703 containerd[1976]: 2025-05-17 00:24:48.388 [INFO][4914] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid3525e8a027 ContainerID="fb324f8285e3c4975f9489bbc2366c497b19bde2d61915107c350c82101cfe32" Namespace="calico-system" Pod="calico-kube-controllers-58f54d8566-6bhlt" WorkloadEndpoint="ip--172--31--18--208-k8s-calico--kube--controllers--58f54d8566--6bhlt-eth0"
May 17 00:24:48.428703 containerd[1976]: 2025-05-17 00:24:48.394 [INFO][4914] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fb324f8285e3c4975f9489bbc2366c497b19bde2d61915107c350c82101cfe32" Namespace="calico-system" Pod="calico-kube-controllers-58f54d8566-6bhlt" WorkloadEndpoint="ip--172--31--18--208-k8s-calico--kube--controllers--58f54d8566--6bhlt-eth0"
May 17 00:24:48.428703 containerd[1976]: 2025-05-17 00:24:48.394 [INFO][4914] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fb324f8285e3c4975f9489bbc2366c497b19bde2d61915107c350c82101cfe32" Namespace="calico-system" Pod="calico-kube-controllers-58f54d8566-6bhlt" WorkloadEndpoint="ip--172--31--18--208-k8s-calico--kube--controllers--58f54d8566--6bhlt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--208-k8s-calico--kube--controllers--58f54d8566--6bhlt-eth0", GenerateName:"calico-kube-controllers-58f54d8566-", Namespace:"calico-system", SelfLink:"", UID:"4c7db054-059f-46a4-9fc7-ca1358ceaf57", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 24, 24, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58f54d8566", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-208", ContainerID:"fb324f8285e3c4975f9489bbc2366c497b19bde2d61915107c350c82101cfe32", Pod:"calico-kube-controllers-58f54d8566-6bhlt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid3525e8a027", MAC:"3a:30:cc:6a:03:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
May 17 00:24:48.428703 containerd[1976]: 2025-05-17 00:24:48.422 [INFO][4914] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fb324f8285e3c4975f9489bbc2366c497b19bde2d61915107c350c82101cfe32" Namespace="calico-system" Pod="calico-kube-controllers-58f54d8566-6bhlt" WorkloadEndpoint="ip--172--31--18--208-k8s-calico--kube--controllers--58f54d8566--6bhlt-eth0"
May 17 00:24:48.490705 containerd[1976]: time="2025-05-17T00:24:48.489306147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:24:48.490705 containerd[1976]: time="2025-05-17T00:24:48.489910958Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:24:48.490705 containerd[1976]: time="2025-05-17T00:24:48.489944390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:24:48.490705 containerd[1976]: time="2025-05-17T00:24:48.490083288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:24:48.545227 systemd-networkd[1837]: calib350a564f8b: Link UP
May 17 00:24:48.553346 systemd-networkd[1837]: calib350a564f8b: Gained carrier
May 17 00:24:48.608379 systemd[1]: Started cri-containerd-fb324f8285e3c4975f9489bbc2366c497b19bde2d61915107c350c82101cfe32.scope - libcontainer container fb324f8285e3c4975f9489bbc2366c497b19bde2d61915107c350c82101cfe32.
May 17 00:24:48.628869 containerd[1976]: 2025-05-17 00:24:48.080 [INFO][4925] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
May 17 00:24:48.628869 containerd[1976]: 2025-05-17 00:24:48.128 [INFO][4925] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--208-k8s-csi--node--driver--7knxl-eth0 csi-node-driver- calico-system 154c5300-472e-444e-8595-31315d3f4aee 906 0 2025-05-17 00:24:24 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78f6f74485 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-18-208 csi-node-driver-7knxl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib350a564f8b [] [] }} ContainerID="32921fc6e992cc47cbc19bee0a7688389dd394c1729071823e912f91d601d80c" Namespace="calico-system" Pod="csi-node-driver-7knxl" WorkloadEndpoint="ip--172--31--18--208-k8s-csi--node--driver--7knxl-"
May 17 00:24:48.628869 containerd[1976]: 2025-05-17 00:24:48.128 [INFO][4925] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="32921fc6e992cc47cbc19bee0a7688389dd394c1729071823e912f91d601d80c" Namespace="calico-system" Pod="csi-node-driver-7knxl" WorkloadEndpoint="ip--172--31--18--208-k8s-csi--node--driver--7knxl-eth0"
May 17 00:24:48.628869 containerd[1976]: 2025-05-17 00:24:48.273 [INFO][4957] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="32921fc6e992cc47cbc19bee0a7688389dd394c1729071823e912f91d601d80c" HandleID="k8s-pod-network.32921fc6e992cc47cbc19bee0a7688389dd394c1729071823e912f91d601d80c" Workload="ip--172--31--18--208-k8s-csi--node--driver--7knxl-eth0"
May 17 00:24:48.628869 containerd[1976]: 2025-05-17 00:24:48.274 [INFO][4957] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="32921fc6e992cc47cbc19bee0a7688389dd394c1729071823e912f91d601d80c" HandleID="k8s-pod-network.32921fc6e992cc47cbc19bee0a7688389dd394c1729071823e912f91d601d80c" Workload="ip--172--31--18--208-k8s-csi--node--driver--7knxl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000210fa0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-208", "pod":"csi-node-driver-7knxl", "timestamp":"2025-05-17 00:24:48.273784949 +0000 UTC"}, Hostname:"ip-172-31-18-208", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 17 00:24:48.628869 containerd[1976]: 2025-05-17 00:24:48.274 [INFO][4957] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 17 00:24:48.628869 containerd[1976]: 2025-05-17 00:24:48.384 [INFO][4957] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 17 00:24:48.628869 containerd[1976]: 2025-05-17 00:24:48.384 [INFO][4957] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-208'
May 17 00:24:48.628869 containerd[1976]: 2025-05-17 00:24:48.416 [INFO][4957] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.32921fc6e992cc47cbc19bee0a7688389dd394c1729071823e912f91d601d80c" host="ip-172-31-18-208"
May 17 00:24:48.628869 containerd[1976]: 2025-05-17 00:24:48.430 [INFO][4957] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-208"
May 17 00:24:48.628869 containerd[1976]: 2025-05-17 00:24:48.451 [INFO][4957] ipam/ipam.go 511: Trying affinity for 192.168.106.128/26 host="ip-172-31-18-208"
May 17 00:24:48.628869 containerd[1976]: 2025-05-17 00:24:48.455 [INFO][4957] ipam/ipam.go 158: Attempting to load block cidr=192.168.106.128/26 host="ip-172-31-18-208"
May 17 00:24:48.628869 containerd[1976]: 2025-05-17 00:24:48.462 [INFO][4957] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="ip-172-31-18-208"
May 17 00:24:48.628869 containerd[1976]: 2025-05-17 00:24:48.462 [INFO][4957] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.32921fc6e992cc47cbc19bee0a7688389dd394c1729071823e912f91d601d80c" host="ip-172-31-18-208"
May 17 00:24:48.628869 containerd[1976]: 2025-05-17 00:24:48.465 [INFO][4957] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.32921fc6e992cc47cbc19bee0a7688389dd394c1729071823e912f91d601d80c
May 17 00:24:48.628869 containerd[1976]: 2025-05-17 00:24:48.480 [INFO][4957] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.32921fc6e992cc47cbc19bee0a7688389dd394c1729071823e912f91d601d80c" host="ip-172-31-18-208"
May 17 00:24:48.628869 containerd[1976]: 2025-05-17 00:24:48.502 [INFO][4957] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.106.132/26] block=192.168.106.128/26 handle="k8s-pod-network.32921fc6e992cc47cbc19bee0a7688389dd394c1729071823e912f91d601d80c" host="ip-172-31-18-208"
May 17 00:24:48.628869 containerd[1976]: 2025-05-17 00:24:48.503 [INFO][4957] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.106.132/26] handle="k8s-pod-network.32921fc6e992cc47cbc19bee0a7688389dd394c1729071823e912f91d601d80c" host="ip-172-31-18-208"
May 17 00:24:48.628869 containerd[1976]: 2025-05-17 00:24:48.504 [INFO][4957] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 17 00:24:48.628869 containerd[1976]: 2025-05-17 00:24:48.505 [INFO][4957] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.132/26] IPv6=[] ContainerID="32921fc6e992cc47cbc19bee0a7688389dd394c1729071823e912f91d601d80c" HandleID="k8s-pod-network.32921fc6e992cc47cbc19bee0a7688389dd394c1729071823e912f91d601d80c" Workload="ip--172--31--18--208-k8s-csi--node--driver--7knxl-eth0"
May 17 00:24:48.631989 containerd[1976]: 2025-05-17 00:24:48.520 [INFO][4925] cni-plugin/k8s.go 418: Populated endpoint ContainerID="32921fc6e992cc47cbc19bee0a7688389dd394c1729071823e912f91d601d80c" Namespace="calico-system" Pod="csi-node-driver-7knxl" WorkloadEndpoint="ip--172--31--18--208-k8s-csi--node--driver--7knxl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--208-k8s-csi--node--driver--7knxl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"154c5300-472e-444e-8595-31315d3f4aee", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 24, 24, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-208", ContainerID:"", Pod:"csi-node-driver-7knxl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib350a564f8b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
May 17 00:24:48.631989 containerd[1976]: 2025-05-17 00:24:48.523 [INFO][4925] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.106.132/32] ContainerID="32921fc6e992cc47cbc19bee0a7688389dd394c1729071823e912f91d601d80c" Namespace="calico-system" Pod="csi-node-driver-7knxl" WorkloadEndpoint="ip--172--31--18--208-k8s-csi--node--driver--7knxl-eth0"
May 17 00:24:48.631989 containerd[1976]: 2025-05-17 00:24:48.524 [INFO][4925] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib350a564f8b ContainerID="32921fc6e992cc47cbc19bee0a7688389dd394c1729071823e912f91d601d80c" Namespace="calico-system" Pod="csi-node-driver-7knxl" WorkloadEndpoint="ip--172--31--18--208-k8s-csi--node--driver--7knxl-eth0"
May 17 00:24:48.631989 containerd[1976]: 2025-05-17 00:24:48.560 [INFO][4925] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="32921fc6e992cc47cbc19bee0a7688389dd394c1729071823e912f91d601d80c" Namespace="calico-system" Pod="csi-node-driver-7knxl" WorkloadEndpoint="ip--172--31--18--208-k8s-csi--node--driver--7knxl-eth0"
May 17 00:24:48.631989 containerd[1976]: 2025-05-17 00:24:48.561 [INFO][4925] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="32921fc6e992cc47cbc19bee0a7688389dd394c1729071823e912f91d601d80c" Namespace="calico-system" Pod="csi-node-driver-7knxl" WorkloadEndpoint="ip--172--31--18--208-k8s-csi--node--driver--7knxl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--208-k8s-csi--node--driver--7knxl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"154c5300-472e-444e-8595-31315d3f4aee", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 24, 24, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-208", ContainerID:"32921fc6e992cc47cbc19bee0a7688389dd394c1729071823e912f91d601d80c", Pod:"csi-node-driver-7knxl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib350a564f8b", MAC:"d2:5e:31:aa:d8:7d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
May 17 00:24:48.631989 containerd[1976]: 2025-05-17 00:24:48.621 [INFO][4925] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="32921fc6e992cc47cbc19bee0a7688389dd394c1729071823e912f91d601d80c" Namespace="calico-system" Pod="csi-node-driver-7knxl" WorkloadEndpoint="ip--172--31--18--208-k8s-csi--node--driver--7knxl-eth0"
May 17 00:24:48.641738 systemd-networkd[1837]: cali7f3eae56384: Link UP
May 17 00:24:48.644625 systemd-networkd[1837]: cali7f3eae56384: Gained carrier
ContainerID="63f9b07c45dbd22617a94994f093e3caaf62f21ee1be08d1c27c4ffe7c451767" Namespace="calico-apiserver" Pod="calico-apiserver-8649d85dd-rkbxs" WorkloadEndpoint="ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--rkbxs-eth0" May 17 00:24:48.675225 containerd[1976]: 2025-05-17 00:24:48.329 [INFO][4965] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="63f9b07c45dbd22617a94994f093e3caaf62f21ee1be08d1c27c4ffe7c451767" HandleID="k8s-pod-network.63f9b07c45dbd22617a94994f093e3caaf62f21ee1be08d1c27c4ffe7c451767" Workload="ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--rkbxs-eth0" May 17 00:24:48.675225 containerd[1976]: 2025-05-17 00:24:48.330 [INFO][4965] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="63f9b07c45dbd22617a94994f093e3caaf62f21ee1be08d1c27c4ffe7c451767" HandleID="k8s-pod-network.63f9b07c45dbd22617a94994f093e3caaf62f21ee1be08d1c27c4ffe7c451767" Workload="ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--rkbxs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277110), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-208", "pod":"calico-apiserver-8649d85dd-rkbxs", "timestamp":"2025-05-17 00:24:48.328152264 +0000 UTC"}, Hostname:"ip-172-31-18-208", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:24:48.675225 containerd[1976]: 2025-05-17 00:24:48.331 [INFO][4965] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:48.675225 containerd[1976]: 2025-05-17 00:24:48.504 [INFO][4965] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:48.675225 containerd[1976]: 2025-05-17 00:24:48.504 [INFO][4965] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-208' May 17 00:24:48.675225 containerd[1976]: 2025-05-17 00:24:48.518 [INFO][4965] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.63f9b07c45dbd22617a94994f093e3caaf62f21ee1be08d1c27c4ffe7c451767" host="ip-172-31-18-208" May 17 00:24:48.675225 containerd[1976]: 2025-05-17 00:24:48.530 [INFO][4965] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-208" May 17 00:24:48.675225 containerd[1976]: 2025-05-17 00:24:48.551 [INFO][4965] ipam/ipam.go 511: Trying affinity for 192.168.106.128/26 host="ip-172-31-18-208" May 17 00:24:48.675225 containerd[1976]: 2025-05-17 00:24:48.560 [INFO][4965] ipam/ipam.go 158: Attempting to load block cidr=192.168.106.128/26 host="ip-172-31-18-208" May 17 00:24:48.675225 containerd[1976]: 2025-05-17 00:24:48.568 [INFO][4965] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="ip-172-31-18-208" May 17 00:24:48.675225 containerd[1976]: 2025-05-17 00:24:48.568 [INFO][4965] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.63f9b07c45dbd22617a94994f093e3caaf62f21ee1be08d1c27c4ffe7c451767" host="ip-172-31-18-208" May 17 00:24:48.675225 containerd[1976]: 2025-05-17 00:24:48.574 [INFO][4965] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.63f9b07c45dbd22617a94994f093e3caaf62f21ee1be08d1c27c4ffe7c451767 May 17 00:24:48.675225 containerd[1976]: 2025-05-17 00:24:48.603 [INFO][4965] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.106.128/26 
handle="k8s-pod-network.63f9b07c45dbd22617a94994f093e3caaf62f21ee1be08d1c27c4ffe7c451767" host="ip-172-31-18-208" May 17 00:24:48.675225 containerd[1976]: 2025-05-17 00:24:48.619 [INFO][4965] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.106.133/26] block=192.168.106.128/26 handle="k8s-pod-network.63f9b07c45dbd22617a94994f093e3caaf62f21ee1be08d1c27c4ffe7c451767" host="ip-172-31-18-208" May 17 00:24:48.675225 containerd[1976]: 2025-05-17 00:24:48.620 [INFO][4965] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.106.133/26] handle="k8s-pod-network.63f9b07c45dbd22617a94994f093e3caaf62f21ee1be08d1c27c4ffe7c451767" host="ip-172-31-18-208" May 17 00:24:48.675225 containerd[1976]: 2025-05-17 00:24:48.621 [INFO][4965] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:48.675225 containerd[1976]: 2025-05-17 00:24:48.621 [INFO][4965] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.133/26] IPv6=[] ContainerID="63f9b07c45dbd22617a94994f093e3caaf62f21ee1be08d1c27c4ffe7c451767" HandleID="k8s-pod-network.63f9b07c45dbd22617a94994f093e3caaf62f21ee1be08d1c27c4ffe7c451767" Workload="ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--rkbxs-eth0" May 17 00:24:48.676438 containerd[1976]: 2025-05-17 00:24:48.630 [INFO][4936] cni-plugin/k8s.go 418: Populated endpoint ContainerID="63f9b07c45dbd22617a94994f093e3caaf62f21ee1be08d1c27c4ffe7c451767" Namespace="calico-apiserver" Pod="calico-apiserver-8649d85dd-rkbxs" WorkloadEndpoint="ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--rkbxs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--rkbxs-eth0", GenerateName:"calico-apiserver-8649d85dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"ddf692d9-2f7b-48c5-85a5-b8c1de84fd75", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 24, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8649d85dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-208", ContainerID:"", Pod:"calico-apiserver-8649d85dd-rkbxs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7f3eae56384", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:48.676438 containerd[1976]: 2025-05-17 00:24:48.631 [INFO][4936] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.106.133/32] ContainerID="63f9b07c45dbd22617a94994f093e3caaf62f21ee1be08d1c27c4ffe7c451767" Namespace="calico-apiserver" Pod="calico-apiserver-8649d85dd-rkbxs" WorkloadEndpoint="ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--rkbxs-eth0" May 17 00:24:48.676438 containerd[1976]: 2025-05-17 00:24:48.631 [INFO][4936] cni-plugin/dataplane_linux.go 
69: Setting the host side veth name to cali7f3eae56384 ContainerID="63f9b07c45dbd22617a94994f093e3caaf62f21ee1be08d1c27c4ffe7c451767" Namespace="calico-apiserver" Pod="calico-apiserver-8649d85dd-rkbxs" WorkloadEndpoint="ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--rkbxs-eth0" May 17 00:24:48.676438 containerd[1976]: 2025-05-17 00:24:48.646 [INFO][4936] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="63f9b07c45dbd22617a94994f093e3caaf62f21ee1be08d1c27c4ffe7c451767" Namespace="calico-apiserver" Pod="calico-apiserver-8649d85dd-rkbxs" WorkloadEndpoint="ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--rkbxs-eth0" May 17 00:24:48.676438 containerd[1976]: 2025-05-17 00:24:48.647 [INFO][4936] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="63f9b07c45dbd22617a94994f093e3caaf62f21ee1be08d1c27c4ffe7c451767" Namespace="calico-apiserver" Pod="calico-apiserver-8649d85dd-rkbxs" WorkloadEndpoint="ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--rkbxs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--rkbxs-eth0", GenerateName:"calico-apiserver-8649d85dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"ddf692d9-2f7b-48c5-85a5-b8c1de84fd75", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 24, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8649d85dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-208", ContainerID:"63f9b07c45dbd22617a94994f093e3caaf62f21ee1be08d1c27c4ffe7c451767", Pod:"calico-apiserver-8649d85dd-rkbxs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7f3eae56384", MAC:"4a:a9:50:fd:2b:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:48.676438 containerd[1976]: 2025-05-17 00:24:48.671 [INFO][4936] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="63f9b07c45dbd22617a94994f093e3caaf62f21ee1be08d1c27c4ffe7c451767" Namespace="calico-apiserver" Pod="calico-apiserver-8649d85dd-rkbxs" WorkloadEndpoint="ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--rkbxs-eth0" May 17 00:24:48.690815 systemd-networkd[1837]: cali6ea7b627f8b: Gained IPv6LL May 17 00:24:48.725713 systemd[1]: Started sshd@7-172.31.18.208:22-147.75.109.163:50732.service - OpenSSH per-connection server daemon (147.75.109.163:50732). May 17 00:24:48.767794 containerd[1976]: time="2025-05-17T00:24:48.765343123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
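The endpoint dumps above all share the same v3.WorkloadEndpoint shape: object metadata plus a spec tying the pod to its node, interface, /32 IP, and Calico profiles. The following sketch mirrors only the spec fields visible in the log in a stripped-down local struct; it is not the real projectcalico.org/v3 type, and the printed values are copied from the calico-apiserver-8649d85dd-rkbxs entry.

```go
package main

import "fmt"

// workloadEndpointSpec is a local mirror of the spec fields the log
// prints (Orchestrator, Node, ContainerID, Pod, Endpoint, IPNetworks,
// Profiles, InterfaceName, MAC), for illustration only.
type workloadEndpointSpec struct {
	Orchestrator  string
	Node          string
	ContainerID   string
	Pod           string
	Endpoint      string
	IPNetworks    []string
	Profiles      []string
	InterfaceName string
	MAC           string
}

func main() {
	// Values taken verbatim from the endpoint written to the datastore above.
	ep := workloadEndpointSpec{
		Orchestrator:  "k8s",
		Node:          "ip-172-31-18-208",
		ContainerID:   "63f9b07c45dbd22617a94994f093e3caaf62f21ee1be08d1c27c4ffe7c451767",
		Pod:           "calico-apiserver-8649d85dd-rkbxs",
		Endpoint:      "eth0",
		IPNetworks:    []string{"192.168.106.133/32"},
		Profiles:      []string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"},
		InterfaceName: "cali7f3eae56384",
		MAC:           "4a:a9:50:fd:2b:5a",
	}
	fmt.Printf("%s -> %s via %s (%s)\n", ep.Pod, ep.IPNetworks[0], ep.InterfaceName, ep.MAC)
}
```

Note that the endpoint is written twice: first "Populated" with an empty ContainerID and MAC, then rewritten once the dataplane has attached the veth and a MAC is known.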
May 17 00:24:48.767794 containerd[1976]: time="2025-05-17T00:24:48.765343123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:24:48.768040 containerd[1976]: time="2025-05-17T00:24:48.768001691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:24:48.768219 containerd[1976]: time="2025-05-17T00:24:48.768187923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:24:48.768739 containerd[1976]: time="2025-05-17T00:24:48.768691071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:24:48.801514 systemd[1]: Started cri-containerd-32921fc6e992cc47cbc19bee0a7688389dd394c1729071823e912f91d601d80c.scope - libcontainer container 32921fc6e992cc47cbc19bee0a7688389dd394c1729071823e912f91d601d80c.
May 17 00:24:48.922046 containerd[1976]: time="2025-05-17T00:24:48.920300146Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:24:48.922046 containerd[1976]: time="2025-05-17T00:24:48.920376437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:24:48.922046 containerd[1976]: time="2025-05-17T00:24:48.920409644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:24:48.925609 containerd[1976]: time="2025-05-17T00:24:48.925358564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:24:48.932616 containerd[1976]: time="2025-05-17T00:24:48.932567434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58f54d8566-6bhlt,Uid:4c7db054-059f-46a4-9fc7-ca1358ceaf57,Namespace:calico-system,Attempt:1,} returns sandbox id \"fb324f8285e3c4975f9489bbc2366c497b19bde2d61915107c350c82101cfe32\""
May 17 00:24:48.939738 containerd[1976]: time="2025-05-17T00:24:48.939700044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\""
May 17 00:24:48.965258 systemd[1]: run-netns-cni\x2d9abd8934\x2d3c73\x2d58ce\x2d1785\x2d0517be25f3a8.mount: Deactivated successfully.
May 17 00:24:49.006193 systemd[1]: Started cri-containerd-63f9b07c45dbd22617a94994f093e3caaf62f21ee1be08d1c27c4ffe7c451767.scope - libcontainer container 63f9b07c45dbd22617a94994f093e3caaf62f21ee1be08d1c27c4ffe7c451767.
May 17 00:24:49.046217 containerd[1976]: time="2025-05-17T00:24:49.045764125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7knxl,Uid:154c5300-472e-444e-8595-31315d3f4aee,Namespace:calico-system,Attempt:1,} returns sandbox id \"32921fc6e992cc47cbc19bee0a7688389dd394c1729071823e912f91d601d80c\""
May 17 00:24:49.074125 sshd[5053]: Accepted publickey for core from 147.75.109.163 port 50732 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4
May 17 00:24:49.078049 sshd[5053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:24:49.086734 systemd-logind[1960]: New session 8 of user core.
May 17 00:24:49.094688 systemd[1]: Started session-8.scope - Session 8 of User core.
May 17 00:24:49.112519 containerd[1976]: time="2025-05-17T00:24:49.112477464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8649d85dd-rkbxs,Uid:ddf692d9-2f7b-48c5-85a5-b8c1de84fd75,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"63f9b07c45dbd22617a94994f093e3caaf62f21ee1be08d1c27c4ffe7c451767\""
May 17 00:24:49.122909 kubelet[3161]: E0517 00:24:49.122050 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-w4ggj" podUID="3edbec67-a280-4b9a-b567-9942c66f18d0"
May 17 00:24:49.367413 kubelet[3161]: I0517 00:24:49.367355 3161 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 17 00:24:49.696582 kernel: bpftool[5168]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
May 17 00:24:49.774837 sshd[5053]: pam_unix(sshd:session): session closed for user core
May 17 00:24:49.782907 systemd[1]: sshd@7-172.31.18.208:22-147.75.109.163:50732.service: Deactivated successfully.
May 17 00:24:49.786832 systemd[1]: session-8.scope: Deactivated successfully.
May 17 00:24:49.788786 systemd-logind[1960]: Session 8 logged out. Waiting for processes to exit.
May 17 00:24:49.790580 systemd-logind[1960]: Removed session 8.
May 17 00:24:49.840897 systemd-networkd[1837]: calid3525e8a027: Gained IPv6LL
May 17 00:24:49.905656 systemd-networkd[1837]: cali7f3eae56384: Gained IPv6LL
May 17 00:24:50.158476 systemd-networkd[1837]: vxlan.calico: Link UP
May 17 00:24:50.158487 systemd-networkd[1837]: vxlan.calico: Gained carrier
May 17 00:24:50.231995 (udev-worker)[4641]: Network interface NamePolicy= disabled on kernel command line.
May 17 00:24:50.288854 systemd-networkd[1837]: calib350a564f8b: Gained IPv6LL
May 17 00:24:50.777406 containerd[1976]: time="2025-05-17T00:24:50.777085024Z" level=info msg="StopPodSandbox for \"1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1\""
May 17 00:24:50.778823 containerd[1976]: time="2025-05-17T00:24:50.778714075Z" level=info msg="StopPodSandbox for \"624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73\""
May 17 00:24:50.779436 containerd[1976]: time="2025-05-17T00:24:50.778718013Z" level=info msg="StopPodSandbox for \"49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000\""
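The kubelet error above is a registry-auth failure, not a networking one: containerd's resolver asks ghcr.io for an anonymous pull token and the token endpoint itself answers 403 Forbidden, so the pull backs off. A minimal Go check of that same endpoint (URL copied verbatim from the log entry) only inspects the HTTP status and would reproduce the cause independently of Kubernetes:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Token endpoint exactly as it appears in the kubelet error above.
	url := "https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io"
	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	// A 403 here matches the "failed to fetch anonymous token" in the log.
	fmt.Println("token endpoint status:", resp.Status)
}
```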
ContainerID="1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1" iface="eth0" netns="/var/run/netns/cni-63cd00b4-cb02-222d-ce46-c26694dd900d" May 17 00:24:50.961452 containerd[1976]: 2025-05-17 00:24:50.873 [INFO][5307] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1" iface="eth0" netns="/var/run/netns/cni-63cd00b4-cb02-222d-ce46-c26694dd900d" May 17 00:24:50.961452 containerd[1976]: 2025-05-17 00:24:50.873 [INFO][5307] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1" May 17 00:24:50.961452 containerd[1976]: 2025-05-17 00:24:50.873 [INFO][5307] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1" May 17 00:24:50.961452 containerd[1976]: 2025-05-17 00:24:50.929 [INFO][5325] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1" HandleID="k8s-pod-network.1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1" Workload="ip--172--31--18--208-k8s-coredns--668d6bf9bc--66bvn-eth0" May 17 00:24:50.961452 containerd[1976]: 2025-05-17 00:24:50.930 [INFO][5325] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:50.961452 containerd[1976]: 2025-05-17 00:24:50.930 [INFO][5325] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:50.961452 containerd[1976]: 2025-05-17 00:24:50.945 [WARNING][5325] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1" HandleID="k8s-pod-network.1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1" Workload="ip--172--31--18--208-k8s-coredns--668d6bf9bc--66bvn-eth0" May 17 00:24:50.961452 containerd[1976]: 2025-05-17 00:24:50.946 [INFO][5325] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1" HandleID="k8s-pod-network.1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1" Workload="ip--172--31--18--208-k8s-coredns--668d6bf9bc--66bvn-eth0" May 17 00:24:50.961452 containerd[1976]: 2025-05-17 00:24:50.949 [INFO][5325] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:50.961452 containerd[1976]: 2025-05-17 00:24:50.957 [INFO][5307] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1" May 17 00:24:50.966432 containerd[1976]: time="2025-05-17T00:24:50.961883735Z" level=info msg="TearDown network for sandbox \"1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1\" successfully" May 17 00:24:50.966432 containerd[1976]: time="2025-05-17T00:24:50.961999367Z" level=info msg="StopPodSandbox for \"1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1\" returns successfully" May 17 00:24:50.966432 containerd[1976]: time="2025-05-17T00:24:50.962933233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-66bvn,Uid:9cde766c-cf7a-4494-a1ab-ccbb03aa389f,Namespace:kube-system,Attempt:1,}" May 17 00:24:50.966349 systemd[1]: run-netns-cni\x2d63cd00b4\x2dcb02\x2d222d\x2dce46\x2dc26694dd900d.mount: Deactivated successfully. 
May 17 00:24:50.986874 containerd[1976]: 2025-05-17 00:24:50.923 [INFO][5308] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000" May 17 00:24:50.986874 containerd[1976]: 2025-05-17 00:24:50.924 [INFO][5308] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000" iface="eth0" netns="/var/run/netns/cni-3a54d791-3189-c058-c6c4-5b9057e3a5fb" May 17 00:24:50.986874 containerd[1976]: 2025-05-17 00:24:50.924 [INFO][5308] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000" iface="eth0" netns="/var/run/netns/cni-3a54d791-3189-c058-c6c4-5b9057e3a5fb" May 17 00:24:50.986874 containerd[1976]: 2025-05-17 00:24:50.925 [INFO][5308] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000" iface="eth0" netns="/var/run/netns/cni-3a54d791-3189-c058-c6c4-5b9057e3a5fb" May 17 00:24:50.986874 containerd[1976]: 2025-05-17 00:24:50.925 [INFO][5308] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000" May 17 00:24:50.986874 containerd[1976]: 2025-05-17 00:24:50.925 [INFO][5308] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000" May 17 00:24:50.986874 containerd[1976]: 2025-05-17 00:24:50.974 [INFO][5334] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000" HandleID="k8s-pod-network.49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000" Workload="ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--zpwmv-eth0" May 17 00:24:50.986874 containerd[1976]: 2025-05-17 00:24:50.974 [INFO][5334] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:50.986874 containerd[1976]: 2025-05-17 00:24:50.974 [INFO][5334] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:50.986874 containerd[1976]: 2025-05-17 00:24:50.980 [WARNING][5334] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000" HandleID="k8s-pod-network.49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000" Workload="ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--zpwmv-eth0" May 17 00:24:50.986874 containerd[1976]: 2025-05-17 00:24:50.980 [INFO][5334] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000" HandleID="k8s-pod-network.49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000" Workload="ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--zpwmv-eth0" May 17 00:24:50.986874 containerd[1976]: 2025-05-17 00:24:50.982 [INFO][5334] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:50.986874 containerd[1976]: 2025-05-17 00:24:50.985 [INFO][5308] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000" May 17 00:24:50.991649 containerd[1976]: time="2025-05-17T00:24:50.991613070Z" level=info msg="TearDown network for sandbox \"49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000\" successfully" May 17 00:24:50.991774 containerd[1976]: time="2025-05-17T00:24:50.991759908Z" level=info msg="StopPodSandbox for \"49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000\" returns successfully" May 17 00:24:50.992571 containerd[1976]: time="2025-05-17T00:24:50.992420709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8649d85dd-zpwmv,Uid:891b95b4-9f23-4ca3-aa2b-1578acf454d2,Namespace:calico-apiserver,Attempt:1,}" May 17 00:24:50.993261 systemd[1]: run-netns-cni\x2d3a54d791\x2d3189\x2dc058\x2dc6c4\x2d5b9057e3a5fb.mount: Deactivated successfully. May 17 00:24:51.012380 containerd[1976]: 2025-05-17 00:24:50.944 [INFO][5314] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73" May 17 00:24:51.012380 containerd[1976]: 2025-05-17 00:24:50.944 [INFO][5314] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73" iface="eth0" netns="/var/run/netns/cni-2ef98ba1-536c-83c0-0c4b-7a821f9290fc" May 17 00:24:51.012380 containerd[1976]: 2025-05-17 00:24:50.944 [INFO][5314] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73" iface="eth0" netns="/var/run/netns/cni-2ef98ba1-536c-83c0-0c4b-7a821f9290fc" May 17 00:24:51.012380 containerd[1976]: 2025-05-17 00:24:50.946 [INFO][5314] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73" iface="eth0" netns="/var/run/netns/cni-2ef98ba1-536c-83c0-0c4b-7a821f9290fc" May 17 00:24:51.012380 containerd[1976]: 2025-05-17 00:24:50.946 [INFO][5314] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73" May 17 00:24:51.012380 containerd[1976]: 2025-05-17 00:24:50.946 [INFO][5314] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73" May 17 00:24:51.012380 containerd[1976]: 2025-05-17 00:24:50.999 [INFO][5342] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73" HandleID="k8s-pod-network.624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73" Workload="ip--172--31--18--208-k8s-coredns--668d6bf9bc--klpmb-eth0" May 17 00:24:51.012380 containerd[1976]: 2025-05-17 00:24:50.999 [INFO][5342] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:51.012380 containerd[1976]: 2025-05-17 00:24:50.999 [INFO][5342] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:51.012380 containerd[1976]: 2025-05-17 00:24:51.006 [WARNING][5342] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73" HandleID="k8s-pod-network.624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73" Workload="ip--172--31--18--208-k8s-coredns--668d6bf9bc--klpmb-eth0" May 17 00:24:51.012380 containerd[1976]: 2025-05-17 00:24:51.006 [INFO][5342] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73" HandleID="k8s-pod-network.624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73" Workload="ip--172--31--18--208-k8s-coredns--668d6bf9bc--klpmb-eth0" May 17 00:24:51.012380 containerd[1976]: 2025-05-17 00:24:51.008 [INFO][5342] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:51.012380 containerd[1976]: 2025-05-17 00:24:51.010 [INFO][5314] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73" May 17 00:24:51.015485 containerd[1976]: time="2025-05-17T00:24:51.015444579Z" level=info msg="TearDown network for sandbox \"624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73\" successfully" May 17 00:24:51.015485 containerd[1976]: time="2025-05-17T00:24:51.015480524Z" level=info msg="StopPodSandbox for \"624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73\" returns successfully" May 17 00:24:51.015888 systemd[1]: run-netns-cni\x2d2ef98ba1\x2d536c\x2d83c0\x2d0c4b\x2d7a821f9290fc.mount: Deactivated successfully. May 17 00:24:51.016474 containerd[1976]: time="2025-05-17T00:24:51.016333591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-klpmb,Uid:e41e279e-d875-4866-b909-66b33f148bb6,Namespace:kube-system,Attempt:1,}" May 17 00:24:51.274782 (udev-worker)[5221]: Network interface NamePolicy= disabled on kernel command line. 
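All three teardowns above follow the same idempotent pattern: release by handleID, then by workloadID, and treat a missing allocation as a no-op ("Asked to release address but it doesn't exist. Ignoring"), because a CNI DEL can legitimately run more than once for the same sandbox. A self-contained sketch of that pattern, with a plain map standing in for the IPAM block store (the stored IP value below is illustrative; the log does not show the old sandbox's address):

```go
package main

import "fmt"

type allocations map[string]string // handleID -> IP

// release removes an allocation if present and silently ignores a
// missing one, mirroring the WARNING in the log.
func release(a allocations, handleID string) {
	if ip, ok := a[handleID]; ok {
		delete(a, handleID)
		fmt.Printf("released %s (handle %s)\n", ip, handleID)
		return
	}
	fmt.Printf("handle %s not found, ignoring\n", handleID)
}

func main() {
	h := "k8s-pod-network.1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1"
	a := allocations{h: "192.168.106.130"} // IP illustrative
	release(a, h)
	release(a, h) // a repeated CNI DEL: ignored, not an error
}
```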
May 17 00:24:51.277523 systemd-networkd[1837]: cali07082215a3c: Link UP
May 17 00:24:51.278670 systemd-networkd[1837]: cali07082215a3c: Gained carrier
May 17 00:24:51.299761 containerd[1976]: 2025-05-17 00:24:51.137 [INFO][5376] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--208-k8s-coredns--668d6bf9bc--klpmb-eth0 coredns-668d6bf9bc- kube-system e41e279e-d875-4866-b909-66b33f148bb6 991 0 2025-05-17 00:24:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-208 coredns-668d6bf9bc-klpmb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali07082215a3c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="66668f70de1d80034dba83cf58f62865d6a3f0c99ee4dd819fd1671eadf7932a" Namespace="kube-system" Pod="coredns-668d6bf9bc-klpmb" WorkloadEndpoint="ip--172--31--18--208-k8s-coredns--668d6bf9bc--klpmb-"
May 17 00:24:51.299761 containerd[1976]: 2025-05-17 00:24:51.138 [INFO][5376] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="66668f70de1d80034dba83cf58f62865d6a3f0c99ee4dd819fd1671eadf7932a" Namespace="kube-system" Pod="coredns-668d6bf9bc-klpmb" WorkloadEndpoint="ip--172--31--18--208-k8s-coredns--668d6bf9bc--klpmb-eth0"
May 17 00:24:51.299761 containerd[1976]: 2025-05-17 00:24:51.195 [INFO][5399] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="66668f70de1d80034dba83cf58f62865d6a3f0c99ee4dd819fd1671eadf7932a" HandleID="k8s-pod-network.66668f70de1d80034dba83cf58f62865d6a3f0c99ee4dd819fd1671eadf7932a" Workload="ip--172--31--18--208-k8s-coredns--668d6bf9bc--klpmb-eth0"
May 17 00:24:51.299761 containerd[1976]: 2025-05-17 00:24:51.196 [INFO][5399] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="66668f70de1d80034dba83cf58f62865d6a3f0c99ee4dd819fd1671eadf7932a" HandleID="k8s-pod-network.66668f70de1d80034dba83cf58f62865d6a3f0c99ee4dd819fd1671eadf7932a" Workload="ip--172--31--18--208-k8s-coredns--668d6bf9bc--klpmb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9b90), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-208", "pod":"coredns-668d6bf9bc-klpmb", "timestamp":"2025-05-17 00:24:51.195942296 +0000 UTC"}, Hostname:"ip-172-31-18-208", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 17 00:24:51.299761 containerd[1976]: 2025-05-17 00:24:51.196 [INFO][5399] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 17 00:24:51.299761 containerd[1976]: 2025-05-17 00:24:51.196 [INFO][5399] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 17 00:24:51.299761 containerd[1976]: 2025-05-17 00:24:51.196 [INFO][5399] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-208'
May 17 00:24:51.299761 containerd[1976]: 2025-05-17 00:24:51.222 [INFO][5399] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.66668f70de1d80034dba83cf58f62865d6a3f0c99ee4dd819fd1671eadf7932a" host="ip-172-31-18-208"
May 17 00:24:51.299761 containerd[1976]: 2025-05-17 00:24:51.230 [INFO][5399] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-208"
May 17 00:24:51.299761 containerd[1976]: 2025-05-17 00:24:51.239 [INFO][5399] ipam/ipam.go 511: Trying affinity for 192.168.106.128/26 host="ip-172-31-18-208"
May 17 00:24:51.299761 containerd[1976]: 2025-05-17 00:24:51.243 [INFO][5399] ipam/ipam.go 158: Attempting to load block cidr=192.168.106.128/26 host="ip-172-31-18-208"
May 17 00:24:51.299761 containerd[1976]: 2025-05-17 00:24:51.247 [INFO][5399] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="ip-172-31-18-208"
May 17 00:24:51.299761 containerd[1976]: 2025-05-17 00:24:51.247 [INFO][5399] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.66668f70de1d80034dba83cf58f62865d6a3f0c99ee4dd819fd1671eadf7932a" host="ip-172-31-18-208"
May 17 00:24:51.299761 containerd[1976]: 2025-05-17 00:24:51.249 [INFO][5399] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.66668f70de1d80034dba83cf58f62865d6a3f0c99ee4dd819fd1671eadf7932a
May 17 00:24:51.299761 containerd[1976]: 2025-05-17 00:24:51.256 [INFO][5399] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.66668f70de1d80034dba83cf58f62865d6a3f0c99ee4dd819fd1671eadf7932a" host="ip-172-31-18-208"
May 17 00:24:51.299761 containerd[1976]: 2025-05-17 00:24:51.264 [INFO][5399] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.106.134/26] block=192.168.106.128/26 handle="k8s-pod-network.66668f70de1d80034dba83cf58f62865d6a3f0c99ee4dd819fd1671eadf7932a" host="ip-172-31-18-208"
May 17 00:24:51.299761 containerd[1976]: 2025-05-17 00:24:51.264 [INFO][5399] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.106.134/26] handle="k8s-pod-network.66668f70de1d80034dba83cf58f62865d6a3f0c99ee4dd819fd1671eadf7932a" host="ip-172-31-18-208"
May 17 00:24:51.299761 containerd[1976]: 2025-05-17 00:24:51.264 [INFO][5399] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 17 00:24:51.299761 containerd[1976]: 2025-05-17 00:24:51.265 [INFO][5399] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.134/26] IPv6=[] ContainerID="66668f70de1d80034dba83cf58f62865d6a3f0c99ee4dd819fd1671eadf7932a" HandleID="k8s-pod-network.66668f70de1d80034dba83cf58f62865d6a3f0c99ee4dd819fd1671eadf7932a" Workload="ip--172--31--18--208-k8s-coredns--668d6bf9bc--klpmb-eth0"
May 17 00:24:51.300384 containerd[1976]: 2025-05-17 00:24:51.270 [INFO][5376] cni-plugin/k8s.go 418: Populated endpoint ContainerID="66668f70de1d80034dba83cf58f62865d6a3f0c99ee4dd819fd1671eadf7932a" Namespace="kube-system" Pod="coredns-668d6bf9bc-klpmb" WorkloadEndpoint="ip--172--31--18--208-k8s-coredns--668d6bf9bc--klpmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--208-k8s-coredns--668d6bf9bc--klpmb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e41e279e-d875-4866-b909-66b33f148bb6", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 24, 11, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-208", ContainerID:"", Pod:"coredns-668d6bf9bc-klpmb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali07082215a3c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
May 17 00:24:51.300384 containerd[1976]: 2025-05-17 00:24:51.271 [INFO][5376] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.106.134/32] ContainerID="66668f70de1d80034dba83cf58f62865d6a3f0c99ee4dd819fd1671eadf7932a" Namespace="kube-system" Pod="coredns-668d6bf9bc-klpmb" WorkloadEndpoint="ip--172--31--18--208-k8s-coredns--668d6bf9bc--klpmb-eth0"
May 17 00:24:51.300384 containerd[1976]: 2025-05-17 00:24:51.271 [INFO][5376] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali07082215a3c ContainerID="66668f70de1d80034dba83cf58f62865d6a3f0c99ee4dd819fd1671eadf7932a" Namespace="kube-system" Pod="coredns-668d6bf9bc-klpmb" WorkloadEndpoint="ip--172--31--18--208-k8s-coredns--668d6bf9bc--klpmb-eth0"
May 17 00:24:51.300384 containerd[1976]: 2025-05-17 00:24:51.278 [INFO][5376] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="66668f70de1d80034dba83cf58f62865d6a3f0c99ee4dd819fd1671eadf7932a" Namespace="kube-system" Pod="coredns-668d6bf9bc-klpmb" WorkloadEndpoint="ip--172--31--18--208-k8s-coredns--668d6bf9bc--klpmb-eth0"
May 17 00:24:51.300384 containerd[1976]: 2025-05-17 00:24:51.278 [INFO][5376] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="66668f70de1d80034dba83cf58f62865d6a3f0c99ee4dd819fd1671eadf7932a" Namespace="kube-system" Pod="coredns-668d6bf9bc-klpmb" WorkloadEndpoint="ip--172--31--18--208-k8s-coredns--668d6bf9bc--klpmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--208-k8s-coredns--668d6bf9bc--klpmb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e41e279e-d875-4866-b909-66b33f148bb6", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 24, 11, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-208", ContainerID:"66668f70de1d80034dba83cf58f62865d6a3f0c99ee4dd819fd1671eadf7932a", Pod:"coredns-668d6bf9bc-klpmb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali07082215a3c", MAC:"5a:fa:31:7a:b4:05", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
May 17 00:24:51.300384 containerd[1976]: 2025-05-17 00:24:51.296 [INFO][5376] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="66668f70de1d80034dba83cf58f62865d6a3f0c99ee4dd819fd1671eadf7932a" Namespace="kube-system" Pod="coredns-668d6bf9bc-klpmb" WorkloadEndpoint="ip--172--31--18--208-k8s-coredns--668d6bf9bc--klpmb-eth0"
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:51.415046 systemd-networkd[1837]: cali0950082a7f3: Link UP May 17 00:24:51.415325 systemd-networkd[1837]: cali0950082a7f3: Gained carrier May 17 00:24:51.458309 systemd[1]: Started cri-containerd-66668f70de1d80034dba83cf58f62865d6a3f0c99ee4dd819fd1671eadf7932a.scope - libcontainer container 66668f70de1d80034dba83cf58f62865d6a3f0c99ee4dd819fd1671eadf7932a. May 17 00:24:51.465093 containerd[1976]: 2025-05-17 00:24:51.134 [INFO][5354] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--208-k8s-coredns--668d6bf9bc--66bvn-eth0 coredns-668d6bf9bc- kube-system 9cde766c-cf7a-4494-a1ab-ccbb03aa389f 987 0 2025-05-17 00:24:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-208 coredns-668d6bf9bc-66bvn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0950082a7f3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fa775ff22f05035f753b7cd3f2e04137d42253af7125da1c617ef01276772f52" Namespace="kube-system" Pod="coredns-668d6bf9bc-66bvn" WorkloadEndpoint="ip--172--31--18--208-k8s-coredns--668d6bf9bc--66bvn-" May 17 00:24:51.465093 containerd[1976]: 2025-05-17 00:24:51.135 [INFO][5354] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fa775ff22f05035f753b7cd3f2e04137d42253af7125da1c617ef01276772f52" Namespace="kube-system" Pod="coredns-668d6bf9bc-66bvn" WorkloadEndpoint="ip--172--31--18--208-k8s-coredns--668d6bf9bc--66bvn-eth0" May 17 00:24:51.465093 containerd[1976]: 2025-05-17 00:24:51.244 [INFO][5392] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fa775ff22f05035f753b7cd3f2e04137d42253af7125da1c617ef01276772f52" HandleID="k8s-pod-network.fa775ff22f05035f753b7cd3f2e04137d42253af7125da1c617ef01276772f52" Workload="ip--172--31--18--208-k8s-coredns--668d6bf9bc--66bvn-eth0" May 17 00:24:51.465093 containerd[1976]: 2025-05-17 00:24:51.245 [INFO][5392] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fa775ff22f05035f753b7cd3f2e04137d42253af7125da1c617ef01276772f52" HandleID="k8s-pod-network.fa775ff22f05035f753b7cd3f2e04137d42253af7125da1c617ef01276772f52" Workload="ip--172--31--18--208-k8s-coredns--668d6bf9bc--66bvn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000327d00), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-208", "pod":"coredns-668d6bf9bc-66bvn", "timestamp":"2025-05-17 00:24:51.244645212 +0000 UTC"}, Hostname:"ip-172-31-18-208", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:24:51.465093 containerd[1976]: 2025-05-17 00:24:51.245 [INFO][5392] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:51.465093 containerd[1976]: 2025-05-17 00:24:51.265 [INFO][5392] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
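[Annotation] The endpoint dumps above render the same ports two ways: the plugin's own summary shows dns UDP 53, dns-tcp TCP 53, and metrics TCP 9153, while the Go struct dumps print them as hex literals (Port:0x35, Port:0x23c1). A quick conversion confirms the two agree; the snippet below is purely illustrative and shares no code with Calico.

    package main

    import "fmt"

    func main() {
        // Port values exactly as printed in the v3.WorkloadEndpoint dumps above.
        ports := map[string]uint16{
            "dns":     0x35,   // 53/UDP
            "dns-tcp": 0x35,   // 53/TCP
            "metrics": 0x23c1, // 9153/TCP
        }
        for name, p := range ports {
            fmt.Printf("%-8s -> %d\n", name, p)
        }
    }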
May 17 00:24:51.465093 containerd[1976]: 2025-05-17 00:24:51.265 [INFO][5392] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-208' May 17 00:24:51.465093 containerd[1976]: 2025-05-17 00:24:51.325 [INFO][5392] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fa775ff22f05035f753b7cd3f2e04137d42253af7125da1c617ef01276772f52" host="ip-172-31-18-208" May 17 00:24:51.465093 containerd[1976]: 2025-05-17 00:24:51.335 [INFO][5392] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-208" May 17 00:24:51.465093 containerd[1976]: 2025-05-17 00:24:51.344 [INFO][5392] ipam/ipam.go 511: Trying affinity for 192.168.106.128/26 host="ip-172-31-18-208" May 17 00:24:51.465093 containerd[1976]: 2025-05-17 00:24:51.348 [INFO][5392] ipam/ipam.go 158: Attempting to load block cidr=192.168.106.128/26 host="ip-172-31-18-208" May 17 00:24:51.465093 containerd[1976]: 2025-05-17 00:24:51.353 [INFO][5392] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="ip-172-31-18-208" May 17 00:24:51.465093 containerd[1976]: 2025-05-17 00:24:51.353 [INFO][5392] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.fa775ff22f05035f753b7cd3f2e04137d42253af7125da1c617ef01276772f52" host="ip-172-31-18-208" May 17 00:24:51.465093 containerd[1976]: 2025-05-17 00:24:51.356 [INFO][5392] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fa775ff22f05035f753b7cd3f2e04137d42253af7125da1c617ef01276772f52 May 17 00:24:51.465093 containerd[1976]: 2025-05-17 00:24:51.366 [INFO][5392] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.fa775ff22f05035f753b7cd3f2e04137d42253af7125da1c617ef01276772f52" host="ip-172-31-18-208" May 17 00:24:51.465093 containerd[1976]: 2025-05-17 00:24:51.385 [INFO][5392] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.106.135/26] block=192.168.106.128/26 handle="k8s-pod-network.fa775ff22f05035f753b7cd3f2e04137d42253af7125da1c617ef01276772f52" host="ip-172-31-18-208" May 17 00:24:51.465093 containerd[1976]: 2025-05-17 00:24:51.385 [INFO][5392] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.106.135/26] handle="k8s-pod-network.fa775ff22f05035f753b7cd3f2e04137d42253af7125da1c617ef01276772f52" host="ip-172-31-18-208" May 17 00:24:51.465093 containerd[1976]: 2025-05-17 00:24:51.385 [INFO][5392] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
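[Annotation] Handler [5392] above walks the whole allocation path: confirm this node's affinity for block 192.168.106.128/26, load the block, claim one address (192.168.106.135), write the block back to claim it, then release the host-wide lock. Below is a minimal sketch of just the claim step, modelling the /26 as a 64-slot bitmap; this is a toy under stated assumptions, not Calico's actual ipam.go, which also tracks handles, attributes, and reservations.

    package main

    import (
        "fmt"
        "net/netip"
    )

    // block models a /26 allocation block: 64 addresses, one slot each.
    type block struct {
        cidr netip.Prefix
        used [64]bool
    }

    // claim marks and returns the first free address in the block.
    func (b *block) claim() (netip.Addr, bool) {
        addr := b.cidr.Addr()
        for i := 0; i < 64; i++ {
            if !b.used[i] {
                b.used[i] = true
                return addr, true
            }
            addr = addr.Next()
        }
        return netip.Addr{}, false
    }

    func main() {
        b := &block{cidr: netip.MustParsePrefix("192.168.106.128/26")}
        // Pretend the first seven slots (.128-.134) already belong to
        // earlier endpoints on this node, as in the log above.
        for i := 0; i < 7; i++ {
            b.claim()
        }
        ip, _ := b.claim()
        fmt.Println("claimed:", ip) // 192.168.106.135, matching handler [5392]
    }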
May 17 00:24:51.465093 containerd[1976]: 2025-05-17 00:24:51.386 [INFO][5392] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.135/26] IPv6=[] ContainerID="fa775ff22f05035f753b7cd3f2e04137d42253af7125da1c617ef01276772f52" HandleID="k8s-pod-network.fa775ff22f05035f753b7cd3f2e04137d42253af7125da1c617ef01276772f52" Workload="ip--172--31--18--208-k8s-coredns--668d6bf9bc--66bvn-eth0" May 17 00:24:51.467967 containerd[1976]: 2025-05-17 00:24:51.402 [INFO][5354] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fa775ff22f05035f753b7cd3f2e04137d42253af7125da1c617ef01276772f52" Namespace="kube-system" Pod="coredns-668d6bf9bc-66bvn" WorkloadEndpoint="ip--172--31--18--208-k8s-coredns--668d6bf9bc--66bvn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--208-k8s-coredns--668d6bf9bc--66bvn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9cde766c-cf7a-4494-a1ab-ccbb03aa389f", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 24, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-208", ContainerID:"", Pod:"coredns-668d6bf9bc-66bvn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0950082a7f3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:51.467967 containerd[1976]: 2025-05-17 00:24:51.404 [INFO][5354] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.106.135/32] ContainerID="fa775ff22f05035f753b7cd3f2e04137d42253af7125da1c617ef01276772f52" Namespace="kube-system" Pod="coredns-668d6bf9bc-66bvn" WorkloadEndpoint="ip--172--31--18--208-k8s-coredns--668d6bf9bc--66bvn-eth0" May 17 00:24:51.467967 containerd[1976]: 2025-05-17 00:24:51.405 [INFO][5354] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0950082a7f3 ContainerID="fa775ff22f05035f753b7cd3f2e04137d42253af7125da1c617ef01276772f52" Namespace="kube-system" Pod="coredns-668d6bf9bc-66bvn" WorkloadEndpoint="ip--172--31--18--208-k8s-coredns--668d6bf9bc--66bvn-eth0" May 17 00:24:51.467967 containerd[1976]: 2025-05-17 00:24:51.417 [INFO][5354] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fa775ff22f05035f753b7cd3f2e04137d42253af7125da1c617ef01276772f52" Namespace="kube-system" Pod="coredns-668d6bf9bc-66bvn" 
WorkloadEndpoint="ip--172--31--18--208-k8s-coredns--668d6bf9bc--66bvn-eth0" May 17 00:24:51.467967 containerd[1976]: 2025-05-17 00:24:51.418 [INFO][5354] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fa775ff22f05035f753b7cd3f2e04137d42253af7125da1c617ef01276772f52" Namespace="kube-system" Pod="coredns-668d6bf9bc-66bvn" WorkloadEndpoint="ip--172--31--18--208-k8s-coredns--668d6bf9bc--66bvn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--208-k8s-coredns--668d6bf9bc--66bvn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9cde766c-cf7a-4494-a1ab-ccbb03aa389f", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 24, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-208", ContainerID:"fa775ff22f05035f753b7cd3f2e04137d42253af7125da1c617ef01276772f52", Pod:"coredns-668d6bf9bc-66bvn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0950082a7f3", MAC:"2a:e0:95:04:d2:1c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:51.467967 containerd[1976]: 2025-05-17 00:24:51.453 [INFO][5354] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fa775ff22f05035f753b7cd3f2e04137d42253af7125da1c617ef01276772f52" Namespace="kube-system" Pod="coredns-668d6bf9bc-66bvn" WorkloadEndpoint="ip--172--31--18--208-k8s-coredns--668d6bf9bc--66bvn-eth0" May 17 00:24:51.590573 containerd[1976]: time="2025-05-17T00:24:51.587226245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:24:51.590573 containerd[1976]: time="2025-05-17T00:24:51.587306502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:24:51.590573 containerd[1976]: time="2025-05-17T00:24:51.587323700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:51.590573 containerd[1976]: time="2025-05-17T00:24:51.587435240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:51.682285 systemd-networkd[1837]: cali5ffcb7f1eb8: Link UP May 17 00:24:51.683760 systemd[1]: Started cri-containerd-fa775ff22f05035f753b7cd3f2e04137d42253af7125da1c617ef01276772f52.scope - libcontainer container fa775ff22f05035f753b7cd3f2e04137d42253af7125da1c617ef01276772f52. May 17 00:24:51.686984 systemd-networkd[1837]: cali5ffcb7f1eb8: Gained carrier May 17 00:24:51.717722 containerd[1976]: time="2025-05-17T00:24:51.717672543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-klpmb,Uid:e41e279e-d875-4866-b909-66b33f148bb6,Namespace:kube-system,Attempt:1,} returns sandbox id \"66668f70de1d80034dba83cf58f62865d6a3f0c99ee4dd819fd1671eadf7932a\"" May 17 00:24:51.731745 containerd[1976]: time="2025-05-17T00:24:51.729516912Z" level=info msg="CreateContainer within sandbox \"66668f70de1d80034dba83cf58f62865d6a3f0c99ee4dd819fd1671eadf7932a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:24:51.754319 containerd[1976]: 2025-05-17 00:24:51.163 [INFO][5366] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--zpwmv-eth0 calico-apiserver-8649d85dd- calico-apiserver 891b95b4-9f23-4ca3-aa2b-1578acf454d2 989 0 2025-05-17 00:24:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8649d85dd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-208 calico-apiserver-8649d85dd-zpwmv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5ffcb7f1eb8 [] [] }} ContainerID="6543616a7f3e892fca19c394d87921519cf7cbb82c375b6c92744ef3d73c076a" Namespace="calico-apiserver" Pod="calico-apiserver-8649d85dd-zpwmv" WorkloadEndpoint="ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--zpwmv-" May 17 00:24:51.754319 containerd[1976]: 2025-05-17 00:24:51.163 [INFO][5366] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6543616a7f3e892fca19c394d87921519cf7cbb82c375b6c92744ef3d73c076a" Namespace="calico-apiserver" Pod="calico-apiserver-8649d85dd-zpwmv" WorkloadEndpoint="ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--zpwmv-eth0" May 17 00:24:51.754319 containerd[1976]: 2025-05-17 00:24:51.248 [INFO][5405] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6543616a7f3e892fca19c394d87921519cf7cbb82c375b6c92744ef3d73c076a" HandleID="k8s-pod-network.6543616a7f3e892fca19c394d87921519cf7cbb82c375b6c92744ef3d73c076a" Workload="ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--zpwmv-eth0" May 17 00:24:51.754319 containerd[1976]: 2025-05-17 00:24:51.248 [INFO][5405] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6543616a7f3e892fca19c394d87921519cf7cbb82c375b6c92744ef3d73c076a" HandleID="k8s-pod-network.6543616a7f3e892fca19c394d87921519cf7cbb82c375b6c92744ef3d73c076a" Workload="ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--zpwmv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000395cc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-208", "pod":"calico-apiserver-8649d85dd-zpwmv", "timestamp":"2025-05-17 00:24:51.248666769 +0000 UTC"}, Hostname:"ip-172-31-18-208", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:24:51.754319 containerd[1976]: 2025-05-17 00:24:51.248 [INFO][5405] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:51.754319 containerd[1976]: 2025-05-17 00:24:51.386 [INFO][5405] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:51.754319 containerd[1976]: 2025-05-17 00:24:51.386 [INFO][5405] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-208' May 17 00:24:51.754319 containerd[1976]: 2025-05-17 00:24:51.441 [INFO][5405] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6543616a7f3e892fca19c394d87921519cf7cbb82c375b6c92744ef3d73c076a" host="ip-172-31-18-208" May 17 00:24:51.754319 containerd[1976]: 2025-05-17 00:24:51.473 [INFO][5405] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-208" May 17 00:24:51.754319 containerd[1976]: 2025-05-17 00:24:51.496 [INFO][5405] ipam/ipam.go 511: Trying affinity for 192.168.106.128/26 host="ip-172-31-18-208" May 17 00:24:51.754319 containerd[1976]: 2025-05-17 00:24:51.505 [INFO][5405] ipam/ipam.go 158: Attempting to load block cidr=192.168.106.128/26 host="ip-172-31-18-208" May 17 00:24:51.754319 containerd[1976]: 2025-05-17 00:24:51.532 [INFO][5405] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="ip-172-31-18-208" May 17 00:24:51.754319 containerd[1976]: 2025-05-17 00:24:51.541 [INFO][5405] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.6543616a7f3e892fca19c394d87921519cf7cbb82c375b6c92744ef3d73c076a" host="ip-172-31-18-208" May 17 00:24:51.754319 containerd[1976]: 2025-05-17 00:24:51.573 [INFO][5405] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6543616a7f3e892fca19c394d87921519cf7cbb82c375b6c92744ef3d73c076a May 17 00:24:51.754319 containerd[1976]: 2025-05-17 00:24:51.592 [INFO][5405] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.6543616a7f3e892fca19c394d87921519cf7cbb82c375b6c92744ef3d73c076a" host="ip-172-31-18-208" May 17 00:24:51.754319 containerd[1976]: 2025-05-17 00:24:51.640 [INFO][5405] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.106.136/26] block=192.168.106.128/26 handle="k8s-pod-network.6543616a7f3e892fca19c394d87921519cf7cbb82c375b6c92744ef3d73c076a" host="ip-172-31-18-208" May 17 00:24:51.754319 containerd[1976]: 2025-05-17 00:24:51.640 [INFO][5405] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.106.136/26] handle="k8s-pod-network.6543616a7f3e892fca19c394d87921519cf7cbb82c375b6c92744ef3d73c076a" host="ip-172-31-18-208" May 17 00:24:51.754319 containerd[1976]: 2025-05-17 00:24:51.640 [INFO][5405] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
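[Annotation] The timestamps above show why concurrent CNI ADDs on one node serialize: handler [5405] asked for the host-wide IPAM lock at 00:24:51.248 but only acquired it at 00:24:51.386, immediately after [5392] released it at 00:24:51.385. A minimal sketch of that queueing, with a plain mutex standing in for Calico's host-wide IPAM lock:

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var (
            hostLock sync.Mutex // stands in for the host-wide IPAM lock
            last     = 134      // last octet handed out so far on this node
            wg       sync.WaitGroup
        )
        pods := []string{"coredns-668d6bf9bc-66bvn", "calico-apiserver-8649d85dd-zpwmv"}
        for _, pod := range pods {
            wg.Add(1)
            go func(pod string) {
                defer wg.Done()
                hostLock.Lock() // concurrent CNI ADDs queue here
                last++
                fmt.Printf("%s -> 192.168.106.%d\n", pod, last)
                hostLock.Unlock()
            }(pod)
        }
        wg.Wait()
        // Which pod wins the race is scheduler-dependent; the point is the
        // claims never interleave, so each pod gets a distinct address.
    }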
May 17 00:24:51.754319 containerd[1976]: 2025-05-17 00:24:51.640 [INFO][5405] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.136/26] IPv6=[] ContainerID="6543616a7f3e892fca19c394d87921519cf7cbb82c375b6c92744ef3d73c076a" HandleID="k8s-pod-network.6543616a7f3e892fca19c394d87921519cf7cbb82c375b6c92744ef3d73c076a" Workload="ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--zpwmv-eth0" May 17 00:24:51.757140 containerd[1976]: 2025-05-17 00:24:51.659 [INFO][5366] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6543616a7f3e892fca19c394d87921519cf7cbb82c375b6c92744ef3d73c076a" Namespace="calico-apiserver" Pod="calico-apiserver-8649d85dd-zpwmv" WorkloadEndpoint="ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--zpwmv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--zpwmv-eth0", GenerateName:"calico-apiserver-8649d85dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"891b95b4-9f23-4ca3-aa2b-1578acf454d2", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 24, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8649d85dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-208", ContainerID:"", Pod:"calico-apiserver-8649d85dd-zpwmv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5ffcb7f1eb8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:51.757140 containerd[1976]: 2025-05-17 00:24:51.659 [INFO][5366] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.106.136/32] ContainerID="6543616a7f3e892fca19c394d87921519cf7cbb82c375b6c92744ef3d73c076a" Namespace="calico-apiserver" Pod="calico-apiserver-8649d85dd-zpwmv" WorkloadEndpoint="ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--zpwmv-eth0" May 17 00:24:51.757140 containerd[1976]: 2025-05-17 00:24:51.660 [INFO][5366] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ffcb7f1eb8 ContainerID="6543616a7f3e892fca19c394d87921519cf7cbb82c375b6c92744ef3d73c076a" Namespace="calico-apiserver" Pod="calico-apiserver-8649d85dd-zpwmv" WorkloadEndpoint="ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--zpwmv-eth0" May 17 00:24:51.757140 containerd[1976]: 2025-05-17 00:24:51.693 [INFO][5366] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6543616a7f3e892fca19c394d87921519cf7cbb82c375b6c92744ef3d73c076a" Namespace="calico-apiserver" Pod="calico-apiserver-8649d85dd-zpwmv" WorkloadEndpoint="ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--zpwmv-eth0" May 17 00:24:51.757140 containerd[1976]: 2025-05-17 00:24:51.703 [INFO][5366] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="6543616a7f3e892fca19c394d87921519cf7cbb82c375b6c92744ef3d73c076a" Namespace="calico-apiserver" Pod="calico-apiserver-8649d85dd-zpwmv" WorkloadEndpoint="ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--zpwmv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--zpwmv-eth0", GenerateName:"calico-apiserver-8649d85dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"891b95b4-9f23-4ca3-aa2b-1578acf454d2", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 24, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8649d85dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-208", ContainerID:"6543616a7f3e892fca19c394d87921519cf7cbb82c375b6c92744ef3d73c076a", Pod:"calico-apiserver-8649d85dd-zpwmv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5ffcb7f1eb8", MAC:"c2:b0:35:60:e5:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:51.757140 containerd[1976]: 2025-05-17 00:24:51.742 [INFO][5366] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6543616a7f3e892fca19c394d87921519cf7cbb82c375b6c92744ef3d73c076a" Namespace="calico-apiserver" Pod="calico-apiserver-8649d85dd-zpwmv" WorkloadEndpoint="ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--zpwmv-eth0" May 17 00:24:51.806261 containerd[1976]: time="2025-05-17T00:24:51.804771054Z" level=info msg="CreateContainer within sandbox \"66668f70de1d80034dba83cf58f62865d6a3f0c99ee4dd819fd1671eadf7932a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0923f241c0248d25c4fc838ce5d0ce0c6d1e1bd7bc02f46ecd9993556a72d575\"" May 17 00:24:51.811130 containerd[1976]: time="2025-05-17T00:24:51.811086369Z" level=info msg="StartContainer for \"0923f241c0248d25c4fc838ce5d0ce0c6d1e1bd7bc02f46ecd9993556a72d575\"" May 17 00:24:51.842765 containerd[1976]: time="2025-05-17T00:24:51.841209271Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:24:51.842765 containerd[1976]: time="2025-05-17T00:24:51.841300905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:24:51.842765 containerd[1976]: time="2025-05-17T00:24:51.841323659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:51.842765 containerd[1976]: time="2025-05-17T00:24:51.841432495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:51.898784 systemd[1]: Started cri-containerd-6543616a7f3e892fca19c394d87921519cf7cbb82c375b6c92744ef3d73c076a.scope - libcontainer container 6543616a7f3e892fca19c394d87921519cf7cbb82c375b6c92744ef3d73c076a. May 17 00:24:51.971784 systemd[1]: Started cri-containerd-0923f241c0248d25c4fc838ce5d0ce0c6d1e1bd7bc02f46ecd9993556a72d575.scope - libcontainer container 0923f241c0248d25c4fc838ce5d0ce0c6d1e1bd7bc02f46ecd9993556a72d575. May 17 00:24:51.996508 containerd[1976]: time="2025-05-17T00:24:51.996274241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-66bvn,Uid:9cde766c-cf7a-4494-a1ab-ccbb03aa389f,Namespace:kube-system,Attempt:1,} returns sandbox id \"fa775ff22f05035f753b7cd3f2e04137d42253af7125da1c617ef01276772f52\"" May 17 00:24:52.015362 containerd[1976]: time="2025-05-17T00:24:52.014503534Z" level=info msg="CreateContainer within sandbox \"fa775ff22f05035f753b7cd3f2e04137d42253af7125da1c617ef01276772f52\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:24:52.016765 systemd-networkd[1837]: vxlan.calico: Gained IPv6LL May 17 00:24:52.052302 containerd[1976]: time="2025-05-17T00:24:52.052247754Z" level=info msg="CreateContainer within sandbox \"fa775ff22f05035f753b7cd3f2e04137d42253af7125da1c617ef01276772f52\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8ee4d6e98658fdc39cf70a4d72eb6c03a086989d4c9bbacb41ece1887d676c9a\"" May 17 00:24:52.058607 containerd[1976]: time="2025-05-17T00:24:52.055853242Z" level=info msg="StartContainer for \"8ee4d6e98658fdc39cf70a4d72eb6c03a086989d4c9bbacb41ece1887d676c9a\"" May 17 00:24:52.158833 systemd[1]: Started cri-containerd-8ee4d6e98658fdc39cf70a4d72eb6c03a086989d4c9bbacb41ece1887d676c9a.scope - libcontainer container 8ee4d6e98658fdc39cf70a4d72eb6c03a086989d4c9bbacb41ece1887d676c9a. 
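[Annotation] The endpoint= dumps above are verbose Go struct literals, but the fields that actually vary per pod boil down to a handful. The struct below is a deliberately trimmed, hypothetical stand-in for v3.WorkloadEndpoint (the real projectcalico.org/v3 type carries many more fields), populated with the values logged for the apiserver pod once its MAC and container ID were filled in:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // endpoint keeps only the fields that differ between the dumps above.
    type endpoint struct {
        Pod           string   `json:"pod"`
        Namespace     string   `json:"namespace"`
        ContainerID   string   `json:"containerID"`
        IPNetworks    []string `json:"ipNetworks"`
        InterfaceName string   `json:"interfaceName"`
        MAC           string   `json:"mac"`
    }

    func main() {
        ep := endpoint{
            Pod:           "calico-apiserver-8649d85dd-zpwmv",
            Namespace:     "calico-apiserver",
            ContainerID:   "6543616a7f3e892fca19c394d87921519cf7cbb82c375b6c92744ef3d73c076a",
            IPNetworks:    []string{"192.168.106.136/32"},
            InterfaceName: "cali5ffcb7f1eb8",
            MAC:           "c2:b0:35:60:e5:d4",
        }
        out, _ := json.MarshalIndent(ep, "", "  ")
        fmt.Println(string(out))
    }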
May 17 00:24:52.181151 containerd[1976]: time="2025-05-17T00:24:52.180732486Z" level=info msg="StartContainer for \"0923f241c0248d25c4fc838ce5d0ce0c6d1e1bd7bc02f46ecd9993556a72d575\" returns successfully" May 17 00:24:52.264665 containerd[1976]: time="2025-05-17T00:24:52.264273438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8649d85dd-zpwmv,Uid:891b95b4-9f23-4ca3-aa2b-1578acf454d2,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6543616a7f3e892fca19c394d87921519cf7cbb82c375b6c92744ef3d73c076a\"" May 17 00:24:52.287681 containerd[1976]: time="2025-05-17T00:24:52.287337085Z" level=info msg="StartContainer for \"8ee4d6e98658fdc39cf70a4d72eb6c03a086989d4c9bbacb41ece1887d676c9a\" returns successfully" May 17 00:24:52.401020 systemd-networkd[1837]: cali07082215a3c: Gained IPv6LL May 17 00:24:52.464891 systemd-networkd[1837]: cali0950082a7f3: Gained IPv6LL May 17 00:24:53.263438 kubelet[3161]: I0517 00:24:53.263360 3161 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-66bvn" podStartSLOduration=42.251594452 podStartE2EDuration="42.251594452s" podCreationTimestamp="2025-05-17 00:24:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:24:53.249909312 +0000 UTC m=+47.592369965" watchObservedRunningTime="2025-05-17 00:24:53.251594452 +0000 UTC m=+47.594055101" May 17 00:24:53.286350 kubelet[3161]: I0517 00:24:53.285881 3161 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-klpmb" podStartSLOduration=42.28585902 podStartE2EDuration="42.28585902s" podCreationTimestamp="2025-05-17 00:24:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:24:53.28542322 +0000 UTC m=+47.627883876" watchObservedRunningTime="2025-05-17 00:24:53.28585902 +0000 UTC m=+47.628319675" May 17 00:24:53.553011 systemd-networkd[1837]: cali5ffcb7f1eb8: Gained IPv6LL May 17 00:24:53.564118 containerd[1976]: time="2025-05-17T00:24:53.564070400Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:53.568144 containerd[1976]: time="2025-05-17T00:24:53.568089053Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512" May 17 00:24:53.571576 containerd[1976]: time="2025-05-17T00:24:53.571487793Z" level=info msg="ImageCreate event name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:53.576774 containerd[1976]: time="2025-05-17T00:24:53.576691813Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:53.577748 containerd[1976]: time="2025-05-17T00:24:53.577239518Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"52671183\" in 
4.637494771s" May 17 00:24:53.577748 containerd[1976]: time="2025-05-17T00:24:53.577273892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 17 00:24:53.578783 containerd[1976]: time="2025-05-17T00:24:53.578313238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 17 00:24:53.586776 containerd[1976]: time="2025-05-17T00:24:53.586743048Z" level=info msg="CreateContainer within sandbox \"fb324f8285e3c4975f9489bbc2366c497b19bde2d61915107c350c82101cfe32\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 17 00:24:53.616698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount943195412.mount: Deactivated successfully. May 17 00:24:53.623875 containerd[1976]: time="2025-05-17T00:24:53.623829810Z" level=info msg="CreateContainer within sandbox \"fb324f8285e3c4975f9489bbc2366c497b19bde2d61915107c350c82101cfe32\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2c397b5d94fafd034e66a3725c04852e055aceb54abcab31880798f7820aa610\"" May 17 00:24:53.624737 containerd[1976]: time="2025-05-17T00:24:53.624703686Z" level=info msg="StartContainer for \"2c397b5d94fafd034e66a3725c04852e055aceb54abcab31880798f7820aa610\"" May 17 00:24:53.671732 systemd[1]: Started cri-containerd-2c397b5d94fafd034e66a3725c04852e055aceb54abcab31880798f7820aa610.scope - libcontainer container 2c397b5d94fafd034e66a3725c04852e055aceb54abcab31880798f7820aa610. May 17 00:24:53.716864 containerd[1976]: time="2025-05-17T00:24:53.716816227Z" level=info msg="StartContainer for \"2c397b5d94fafd034e66a3725c04852e055aceb54abcab31880798f7820aa610\" returns successfully" May 17 00:24:54.809981 systemd[1]: Started sshd@8-172.31.18.208:22-147.75.109.163:50740.service - OpenSSH per-connection server daemon (147.75.109.163:50740). May 17 00:24:55.046556 sshd[5697]: Accepted publickey for core from 147.75.109.163 port 50740 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:24:55.051709 sshd[5697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:55.062346 systemd-logind[1960]: New session 9 of user core. May 17 00:24:55.066749 systemd[1]: Started session-9.scope - Session 9 of User core. 
May 17 00:24:55.125474 containerd[1976]: time="2025-05-17T00:24:55.125416310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:55.127432 containerd[1976]: time="2025-05-17T00:24:55.127377382Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 17 00:24:55.130569 containerd[1976]: time="2025-05-17T00:24:55.130275341Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:55.134850 containerd[1976]: time="2025-05-17T00:24:55.134811584Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:55.136243 containerd[1976]: time="2025-05-17T00:24:55.135367753Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 1.557008475s" May 17 00:24:55.136243 containerd[1976]: time="2025-05-17T00:24:55.135414205Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 17 00:24:55.137622 containerd[1976]: time="2025-05-17T00:24:55.137594573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:24:55.142944 containerd[1976]: time="2025-05-17T00:24:55.142739675Z" level=info msg="CreateContainer within sandbox \"32921fc6e992cc47cbc19bee0a7688389dd394c1729071823e912f91d601d80c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 17 00:24:55.176399 containerd[1976]: time="2025-05-17T00:24:55.176350092Z" level=info msg="CreateContainer within sandbox \"32921fc6e992cc47cbc19bee0a7688389dd394c1729071823e912f91d601d80c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e8a3d2681f916471b9dbf9720c503781a1c2b03ff3ffce6e8aae513f6b15b33b\"" May 17 00:24:55.178707 containerd[1976]: time="2025-05-17T00:24:55.177422764Z" level=info msg="StartContainer for \"e8a3d2681f916471b9dbf9720c503781a1c2b03ff3ffce6e8aae513f6b15b33b\"" May 17 00:24:55.222720 systemd[1]: Started cri-containerd-e8a3d2681f916471b9dbf9720c503781a1c2b03ff3ffce6e8aae513f6b15b33b.scope - libcontainer container e8a3d2681f916471b9dbf9720c503781a1c2b03ff3ffce6e8aae513f6b15b33b. May 17 00:24:55.247733 kubelet[3161]: I0517 00:24:55.246988 3161 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:24:55.259057 containerd[1976]: time="2025-05-17T00:24:55.259020479Z" level=info msg="StartContainer for \"e8a3d2681f916471b9dbf9720c503781a1c2b03ff3ffce6e8aae513f6b15b33b\" returns successfully" May 17 00:24:55.815595 sshd[5697]: pam_unix(sshd:session): session closed for user core May 17 00:24:55.819388 systemd[1]: sshd@8-172.31.18.208:22-147.75.109.163:50740.service: Deactivated successfully. May 17 00:24:55.821388 systemd[1]: session-9.scope: Deactivated successfully. May 17 00:24:55.822215 systemd-logind[1960]: Session 9 logged out. Waiting for processes to exit. 
May 17 00:24:55.823145 systemd-logind[1960]: Removed session 9. May 17 00:24:56.262948 ntpd[1954]: Listen normally on 8 vxlan.calico 192.168.106.128:123 May 17 00:24:56.263023 ntpd[1954]: Listen normally on 9 cali059bf3e4366 [fe80::ecee:eeff:feee:eeee%4]:123 May 17 00:24:56.263065 ntpd[1954]: Listen normally on 10 cali6ea7b627f8b [fe80::ecee:eeff:feee:eeee%5]:123 May 17 00:24:56.263094 ntpd[1954]: Listen normally on 11 calid3525e8a027 [fe80::ecee:eeff:feee:eeee%6]:123 May 17 00:24:56.263121 ntpd[1954]: Listen normally on 12 calib350a564f8b [fe80::ecee:eeff:feee:eeee%7]:123 May 17 00:24:56.263149 ntpd[1954]: Listen normally on 13 cali7f3eae56384 [fe80::ecee:eeff:feee:eeee%8]:123 May 17 00:24:56.263177 ntpd[1954]: Listen normally on 14 vxlan.calico [fe80::6437:f5ff:fe14:a769%9]:123 May 17 00:24:56.263210 ntpd[1954]: Listen normally on 15 cali07082215a3c [fe80::ecee:eeff:feee:eeee%12]:123 May 17 00:24:56.263239 ntpd[1954]: Listen normally on 16 cali0950082a7f3 [fe80::ecee:eeff:feee:eeee%13]:123 May 17 00:24:56.263266 ntpd[1954]: Listen normally on 17 cali5ffcb7f1eb8 [fe80::ecee:eeff:feee:eeee%14]:123 May 17 00:24:57.272364 kubelet[3161]: I0517 00:24:57.272059 3161 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:24:57.493069 kubelet[3161]: I0517 00:24:57.492990 3161 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-58f54d8566-6bhlt" podStartSLOduration=28.853666473 podStartE2EDuration="33.492967756s" podCreationTimestamp="2025-05-17 00:24:24 +0000 UTC" firstStartedPulling="2025-05-17 00:24:48.938665397 +0000 UTC m=+43.281126031" lastFinishedPulling="2025-05-17 00:24:53.577966681 +0000 UTC m=+47.920427314" observedRunningTime="2025-05-17 00:24:54.253128457 +0000 UTC m=+48.595589110" watchObservedRunningTime="2025-05-17 00:24:57.492967756 +0000 UTC m=+51.835428409" May 17 00:24:58.307193 containerd[1976]: time="2025-05-17T00:24:58.307136385Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:58.308957 containerd[1976]: time="2025-05-17T00:24:58.308898312Z" level=info msg="stop pulling image
ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431" May 17 00:24:58.311456 containerd[1976]: time="2025-05-17T00:24:58.311426747Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:58.314846 containerd[1976]: time="2025-05-17T00:24:58.314785131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:58.315479 containerd[1976]: time="2025-05-17T00:24:58.315342853Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 3.177407612s" May 17 00:24:58.315479 containerd[1976]: time="2025-05-17T00:24:58.315376153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 00:24:58.316693 containerd[1976]: time="2025-05-17T00:24:58.316670972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:24:58.319299 containerd[1976]: time="2025-05-17T00:24:58.319229542Z" level=info msg="CreateContainer within sandbox \"63f9b07c45dbd22617a94994f093e3caaf62f21ee1be08d1c27c4ffe7c451767\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:24:58.343592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2747269152.mount: Deactivated successfully. May 17 00:24:58.348877 containerd[1976]: time="2025-05-17T00:24:58.348826035Z" level=info msg="CreateContainer within sandbox \"63f9b07c45dbd22617a94994f093e3caaf62f21ee1be08d1c27c4ffe7c451767\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9115e316c5edf533a5423a120864501780ff7824bf4326b131697ee300b90abe\"" May 17 00:24:58.349765 containerd[1976]: time="2025-05-17T00:24:58.349609029Z" level=info msg="StartContainer for \"9115e316c5edf533a5423a120864501780ff7824bf4326b131697ee300b90abe\"" May 17 00:24:58.414863 systemd[1]: Started cri-containerd-9115e316c5edf533a5423a120864501780ff7824bf4326b131697ee300b90abe.scope - libcontainer container 9115e316c5edf533a5423a120864501780ff7824bf4326b131697ee300b90abe. 
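[Annotation] The ntpd burst at 00:24:56 above is ntpd noticing the new cali* veths and vxlan.calico and opening an NTP listener on each interface's IPv6 link-local address; the %N suffix is the zone index that scopes an fe80:: address to one specific interface. A standard-library sketch that enumerates the same data on a host (on a machine without Calico interfaces it simply prints nothing):

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    func main() {
        ifaces, err := net.Interfaces()
        if err != nil {
            panic(err)
        }
        for _, ifc := range ifaces {
            addrs, _ := ifc.Addrs()
            for _, a := range addrs {
                ipn, ok := a.(*net.IPNet)
                if !ok || !ipn.IP.IsLinkLocalUnicast() {
                    continue
                }
                // Same shape as ntpd's "Listen normally on N <iface> [addr%zone]:123".
                if strings.HasPrefix(ifc.Name, "cali") || ifc.Name == "vxlan.calico" {
                    fmt.Printf("%s [%s%%%d]:123\n", ifc.Name, ipn.IP, ifc.Index)
                }
            }
        }
    }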
May 17 00:24:58.457154 containerd[1976]: time="2025-05-17T00:24:58.457117518Z" level=info msg="StartContainer for \"9115e316c5edf533a5423a120864501780ff7824bf4326b131697ee300b90abe\" returns successfully" May 17 00:24:58.664724 containerd[1976]: time="2025-05-17T00:24:58.663150342Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:58.666268 containerd[1976]: time="2025-05-17T00:24:58.666234054Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 17 00:24:58.668095 containerd[1976]: time="2025-05-17T00:24:58.668068544Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 351.370803ms" May 17 00:24:58.668192 containerd[1976]: time="2025-05-17T00:24:58.668178944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 00:24:58.669154 containerd[1976]: time="2025-05-17T00:24:58.669136853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 17 00:24:58.673619 containerd[1976]: time="2025-05-17T00:24:58.673586745Z" level=info msg="CreateContainer within sandbox \"6543616a7f3e892fca19c394d87921519cf7cbb82c375b6c92744ef3d73c076a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:24:58.717484 containerd[1976]: time="2025-05-17T00:24:58.717344317Z" level=info msg="CreateContainer within sandbox \"6543616a7f3e892fca19c394d87921519cf7cbb82c375b6c92744ef3d73c076a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4404840d1f53d5d4a18a1dddd6cab5b7d043f8ccae4dcaf5f84d0cf3e7873a2e\"" May 17 00:24:58.719570 containerd[1976]: time="2025-05-17T00:24:58.719526596Z" level=info msg="StartContainer for \"4404840d1f53d5d4a18a1dddd6cab5b7d043f8ccae4dcaf5f84d0cf3e7873a2e\"" May 17 00:24:58.807732 systemd[1]: Started cri-containerd-4404840d1f53d5d4a18a1dddd6cab5b7d043f8ccae4dcaf5f84d0cf3e7873a2e.scope - libcontainer container 4404840d1f53d5d4a18a1dddd6cab5b7d043f8ccae4dcaf5f84d0cf3e7873a2e. 
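[Annotation] The two pulls of ghcr.io/flatcar/calico/apiserver:v3.30.0 above behave very differently: the first transfers the image (logged size 48745150) in ~3.18s, while the second finishes in ~351ms having read only 77 bytes, and emits an ImageUpdate rather than ImageCreate event. That is consistent with the layers already sitting in containerd's content store, so only the manifest needed revalidating. Back-of-envelope throughput for the first pull, assuming the logged size approximates the bytes actually transferred:

    package main

    import "fmt"

    func main() {
        const (
            bytes   = 48745150    // size "48745150" from the pull log above
            seconds = 3.177407612 // "in 3.177407612s"
        )
        mib := float64(bytes) / (1 << 20)
        fmt.Printf("%.1f MiB in %.2fs = %.1f MiB/s\n", mib, seconds, mib/seconds)
    }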
May 17 00:24:58.877388 containerd[1976]: time="2025-05-17T00:24:58.877337103Z" level=info msg="StartContainer for \"4404840d1f53d5d4a18a1dddd6cab5b7d043f8ccae4dcaf5f84d0cf3e7873a2e\" returns successfully" May 17 00:24:59.291171 kubelet[3161]: I0517 00:24:59.290853 3161 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8649d85dd-zpwmv" podStartSLOduration=31.893339608 podStartE2EDuration="38.29082936s" podCreationTimestamp="2025-05-17 00:24:21 +0000 UTC" firstStartedPulling="2025-05-17 00:24:52.271408443 +0000 UTC m=+46.613869086" lastFinishedPulling="2025-05-17 00:24:58.668898191 +0000 UTC m=+53.011358838" observedRunningTime="2025-05-17 00:24:59.288474746 +0000 UTC m=+53.630935411" watchObservedRunningTime="2025-05-17 00:24:59.29082936 +0000 UTC m=+53.633290016" May 17 00:24:59.322172 kubelet[3161]: I0517 00:24:59.321173 3161 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8649d85dd-rkbxs" podStartSLOduration=29.118919192 podStartE2EDuration="38.321151635s" podCreationTimestamp="2025-05-17 00:24:21 +0000 UTC" firstStartedPulling="2025-05-17 00:24:49.1140937 +0000 UTC m=+43.456554345" lastFinishedPulling="2025-05-17 00:24:58.316326144 +0000 UTC m=+52.658786788" observedRunningTime="2025-05-17 00:24:59.318328731 +0000 UTC m=+53.660789384" watchObservedRunningTime="2025-05-17 00:24:59.321151635 +0000 UTC m=+53.663612291" May 17 00:24:59.343560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1451696929.mount: Deactivated successfully. May 17 00:25:00.276671 kubelet[3161]: I0517 00:25:00.276632 3161 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:25:00.277026 kubelet[3161]: I0517 00:25:00.277008 3161 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:25:00.856112 systemd[1]: Started sshd@9-172.31.18.208:22-147.75.109.163:46276.service - OpenSSH per-connection server daemon (147.75.109.163:46276). May 17 00:25:01.119581 sshd[5898]: Accepted publickey for core from 147.75.109.163 port 46276 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:25:01.125132 sshd[5898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:25:01.130556 containerd[1976]: time="2025-05-17T00:25:01.130387665Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:01.130947 systemd-logind[1960]: New session 10 of user core. May 17 00:25:01.134589 containerd[1976]: time="2025-05-17T00:25:01.134515835Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639" May 17 00:25:01.137779 systemd[1]: Started session-10.scope - Session 10 of User core. 
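[Annotation] Unlike the coredns case, the apiserver pods above did pull images, and podStartSLOduration excludes that window: for calico-apiserver-8649d85dd-rkbxs the E2E duration is 38.321151635s and the pull window is 00:24:58.316326144 − 00:24:49.1140937 ≈ 9.202s, leaving ≈ 29.118919191s, which matches the logged 29.118919192 up to float64 rounding. Checked with the standard library:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2025-05-17 00:24:21 +0000 UTC")
        observed := parse("2025-05-17 00:24:59.321151635 +0000 UTC")
        pullStart := parse("2025-05-17 00:24:49.1140937 +0000 UTC")
        pullEnd := parse("2025-05-17 00:24:58.316326144 +0000 UTC")
        e2e := observed.Sub(created)
        slo := e2e - pullEnd.Sub(pullStart)
        fmt.Println(e2e, slo) // 38.321151635s 29.118919191s (kubelet logs 29.118919192)
    }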
May 17 00:25:01.139173 containerd[1976]: time="2025-05-17T00:25:01.138984491Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:01.143902 containerd[1976]: time="2025-05-17T00:25:01.143182306Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:01.144360 containerd[1976]: time="2025-05-17T00:25:01.143981874Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 2.473833901s" May 17 00:25:01.144360 containerd[1976]: time="2025-05-17T00:25:01.144016371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\"" May 17 00:25:01.148516 containerd[1976]: time="2025-05-17T00:25:01.148442085Z" level=info msg="CreateContainer within sandbox \"32921fc6e992cc47cbc19bee0a7688389dd394c1729071823e912f91d601d80c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 17 00:25:01.174989 containerd[1976]: time="2025-05-17T00:25:01.174886591Z" level=info msg="CreateContainer within sandbox \"32921fc6e992cc47cbc19bee0a7688389dd394c1729071823e912f91d601d80c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"943d027ec51f7fa3cfc0f7c7b37095874693dae8e86b07b9d8827f1abd002423\"" May 17 00:25:01.176802 containerd[1976]: time="2025-05-17T00:25:01.175645064Z" level=info msg="StartContainer for \"943d027ec51f7fa3cfc0f7c7b37095874693dae8e86b07b9d8827f1abd002423\"" May 17 00:25:01.248785 systemd[1]: Started cri-containerd-943d027ec51f7fa3cfc0f7c7b37095874693dae8e86b07b9d8827f1abd002423.scope - libcontainer container 943d027ec51f7fa3cfc0f7c7b37095874693dae8e86b07b9d8827f1abd002423. May 17 00:25:01.309679 containerd[1976]: time="2025-05-17T00:25:01.308731659Z" level=info msg="StartContainer for \"943d027ec51f7fa3cfc0f7c7b37095874693dae8e86b07b9d8827f1abd002423\" returns successfully" May 17 00:25:01.802285 containerd[1976]: time="2025-05-17T00:25:01.802246030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:25:01.904852 sshd[5898]: pam_unix(sshd:session): session closed for user core May 17 00:25:01.910236 systemd[1]: sshd@9-172.31.18.208:22-147.75.109.163:46276.service: Deactivated successfully. May 17 00:25:01.913136 systemd[1]: session-10.scope: Deactivated successfully. May 17 00:25:01.913907 systemd-logind[1960]: Session 10 logged out. Waiting for processes to exit. May 17 00:25:01.915447 systemd-logind[1960]: Removed session 10. May 17 00:25:01.938595 systemd[1]: Started sshd@10-172.31.18.208:22-147.75.109.163:46286.service - OpenSSH per-connection server daemon (147.75.109.163:46286). 
May 17 00:25:02.020802 containerd[1976]: time="2025-05-17T00:25:02.020729558Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:25:02.024546 containerd[1976]: time="2025-05-17T00:25:02.024391987Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:25:02.024546 containerd[1976]: time="2025-05-17T00:25:02.024481396Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:25:02.074730 kubelet[3161]: E0517 00:25:02.048076 3161 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:25:02.084052 kubelet[3161]: E0517 00:25:02.083950 3161 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:25:02.106211 kubelet[3161]: E0517 00:25:02.105711 3161 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b29552b59a2b4980bc180c562b9beff2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v4jmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-d96dfd79b-fl892_calico-system(9e29c649-bade-4daa-bb31-67432210eca8): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:25:02.109665 containerd[1976]: time="2025-05-17T00:25:02.109471181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:25:02.125065 sshd[5951]: Accepted publickey for core from 147.75.109.163 port 46286 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:25:02.128205 sshd[5951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:25:02.136982 systemd-logind[1960]: New session 11 of user core. May 17 00:25:02.141712 systemd[1]: Started session-11.scope - Session 11 of User core. 
May 17 00:25:02.308626 containerd[1976]: time="2025-05-17T00:25:02.308441202Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:25:02.311277 containerd[1976]: time="2025-05-17T00:25:02.310931863Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:25:02.311277 containerd[1976]: time="2025-05-17T00:25:02.310987001Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:25:02.312237 kubelet[3161]: E0517 00:25:02.312202 3161 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:25:02.312855 kubelet[3161]: E0517 00:25:02.312403 3161 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:25:02.312855 kubelet[3161]: E0517 00:25:02.312507 3161 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v4jmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-d96dfd79b-fl892_calico-system(9e29c649-bade-4daa-bb31-67432210eca8): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:25:02.367301 kubelet[3161]: E0517 00:25:02.366756 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-d96dfd79b-fl892" podUID="9e29c649-bade-4daa-bb31-67432210eca8" May 17 00:25:02.434941 sshd[5951]: pam_unix(sshd:session): session closed for user core May 17 00:25:02.439042 systemd-logind[1960]: Session 11 logged out. 
Waiting for processes to exit. May 17 00:25:02.439338 systemd[1]: sshd@10-172.31.18.208:22-147.75.109.163:46286.service: Deactivated successfully. May 17 00:25:02.442402 systemd[1]: session-11.scope: Deactivated successfully. May 17 00:25:02.447701 systemd-logind[1960]: Removed session 11. May 17 00:25:02.471893 systemd[1]: Started sshd@11-172.31.18.208:22-147.75.109.163:46288.service - OpenSSH per-connection server daemon (147.75.109.163:46288). May 17 00:25:02.524207 kubelet[3161]: I0517 00:25:02.511053 3161 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7knxl" podStartSLOduration=26.420492285 podStartE2EDuration="38.511034776s" podCreationTimestamp="2025-05-17 00:24:24 +0000 UTC" firstStartedPulling="2025-05-17 00:24:49.054666689 +0000 UTC m=+43.397127329" lastFinishedPulling="2025-05-17 00:25:01.14520919 +0000 UTC m=+55.487669820" observedRunningTime="2025-05-17 00:25:02.510817547 +0000 UTC m=+56.853278197" watchObservedRunningTime="2025-05-17 00:25:02.511034776 +0000 UTC m=+56.853495428" May 17 00:25:02.550121 kubelet[3161]: I0517 00:25:02.540938 3161 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 17 00:25:02.555456 kubelet[3161]: I0517 00:25:02.555330 3161 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 17 00:25:02.663897 sshd[5965]: Accepted publickey for core from 147.75.109.163 port 46288 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:25:02.664590 sshd[5965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:25:02.671182 systemd-logind[1960]: New session 12 of user core. May 17 00:25:02.677772 systemd[1]: Started session-12.scope - Session 12 of User core. May 17 00:25:02.901177 sshd[5965]: pam_unix(sshd:session): session closed for user core May 17 00:25:02.910225 systemd[1]: sshd@11-172.31.18.208:22-147.75.109.163:46288.service: Deactivated successfully. May 17 00:25:02.911990 systemd[1]: session-12.scope: Deactivated successfully. May 17 00:25:02.913479 systemd-logind[1960]: Session 12 logged out. Waiting for processes to exit. May 17 00:25:02.914296 systemd-logind[1960]: Removed session 12. 
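Buried in the session noise above, the pod_startup_latency_tracker entry for csi-node-driver-7knxl encodes a concrete relationship: the SLO duration is the end-to-end startup time minus the image-pull window. Recomputing both figures from the four timestamps in that entry (the helper and variable names here are ours, not kubelet's):

package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2025-05-17 00:24:24 +0000 UTC")            // podCreationTimestamp
	firstPull := parse("2025-05-17 00:24:49.054666689 +0000 UTC") // firstStartedPulling
	lastPull := parse("2025-05-17 00:25:01.14520919 +0000 UTC")   // lastFinishedPulling
	running := parse("2025-05-17 00:25:02.511034776 +0000 UTC")   // watchObservedRunningTime

	e2e := running.Sub(created)          // podStartE2EDuration ≈ 38.511s
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration ≈ 26.420s
	fmt.Println(e2e, slo)
}

The ≈12.09s pull window is exactly the gap between firstStartedPulling and lastFinishedPulling, which is why podStartSLOduration (≈26.42s) comes out so much lower than podStartE2EDuration (≈38.51s).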
May 17 00:25:03.789312 containerd[1976]: time="2025-05-17T00:25:03.789280181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:25:04.019508 containerd[1976]: time="2025-05-17T00:25:04.019459699Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:25:04.021607 containerd[1976]: time="2025-05-17T00:25:04.021560856Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:25:04.021781 containerd[1976]: time="2025-05-17T00:25:04.021573767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:25:04.021817 kubelet[3161]: E0517 00:25:04.021781 3161 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:25:04.022181 kubelet[3161]: E0517 00:25:04.021827 3161 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:25:04.025038 kubelet[3161]: E0517 00:25:04.024959 3161 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mcgrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-w4ggj_calico-system(3edbec67-a280-4b9a-b567-9942c66f18d0): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:25:04.026190 kubelet[3161]: E0517 00:25:04.026149 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-w4ggj" podUID="3edbec67-a280-4b9a-b567-9942c66f18d0" May 17 00:25:06.009931 containerd[1976]: time="2025-05-17T00:25:06.009893095Z" level=info msg="StopPodSandbox for \"828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84\"" May 17 00:25:06.949491 containerd[1976]: 2025-05-17 00:25:06.496 [WARNING][5987] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84" WorkloadEndpoint="ip--172--31--18--208-k8s-whisker--56d5b74c78--d7rc9-eth0" May 17 00:25:06.949491 containerd[1976]: 2025-05-17 00:25:06.503 [INFO][5987] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84" May 17 00:25:06.949491 containerd[1976]: 2025-05-17 00:25:06.503 [INFO][5987] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84" iface="eth0" netns="" May 17 00:25:06.949491 containerd[1976]: 2025-05-17 00:25:06.503 [INFO][5987] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84" May 17 00:25:06.949491 containerd[1976]: 2025-05-17 00:25:06.503 [INFO][5987] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84" May 17 00:25:06.949491 containerd[1976]: 2025-05-17 00:25:06.916 [INFO][5996] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84" HandleID="k8s-pod-network.828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84" Workload="ip--172--31--18--208-k8s-whisker--56d5b74c78--d7rc9-eth0" May 17 00:25:06.949491 containerd[1976]: 2025-05-17 00:25:06.924 [INFO][5996] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:25:06.949491 containerd[1976]: 2025-05-17 00:25:06.925 [INFO][5996] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:25:06.949491 containerd[1976]: 2025-05-17 00:25:06.944 [WARNING][5996] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84" HandleID="k8s-pod-network.828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84" Workload="ip--172--31--18--208-k8s-whisker--56d5b74c78--d7rc9-eth0" May 17 00:25:06.949491 containerd[1976]: 2025-05-17 00:25:06.944 [INFO][5996] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84" HandleID="k8s-pod-network.828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84" Workload="ip--172--31--18--208-k8s-whisker--56d5b74c78--d7rc9-eth0" May 17 00:25:06.949491 containerd[1976]: 2025-05-17 00:25:06.945 [INFO][5996] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:25:06.949491 containerd[1976]: 2025-05-17 00:25:06.947 [INFO][5987] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84" May 17 00:25:06.951205 containerd[1976]: time="2025-05-17T00:25:06.949542677Z" level=info msg="TearDown network for sandbox \"828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84\" successfully" May 17 00:25:06.951205 containerd[1976]: time="2025-05-17T00:25:06.949565204Z" level=info msg="StopPodSandbox for \"828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84\" returns successfully" May 17 00:25:07.076757 containerd[1976]: time="2025-05-17T00:25:07.076704094Z" level=info msg="RemovePodSandbox for \"828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84\"" May 17 00:25:07.081897 containerd[1976]: time="2025-05-17T00:25:07.081613352Z" level=info msg="Forcibly stopping sandbox \"828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84\"" May 17 00:25:07.163606 containerd[1976]: 2025-05-17 00:25:07.128 [WARNING][6015] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84" WorkloadEndpoint="ip--172--31--18--208-k8s-whisker--56d5b74c78--d7rc9-eth0" May 17 00:25:07.163606 containerd[1976]: 2025-05-17 00:25:07.128 [INFO][6015] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84" May 17 00:25:07.163606 containerd[1976]: 2025-05-17 00:25:07.128 [INFO][6015] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84" iface="eth0" netns="" May 17 00:25:07.163606 containerd[1976]: 2025-05-17 00:25:07.128 [INFO][6015] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84" May 17 00:25:07.163606 containerd[1976]: 2025-05-17 00:25:07.128 [INFO][6015] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84" May 17 00:25:07.163606 containerd[1976]: 2025-05-17 00:25:07.149 [INFO][6022] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84" HandleID="k8s-pod-network.828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84" Workload="ip--172--31--18--208-k8s-whisker--56d5b74c78--d7rc9-eth0" May 17 00:25:07.163606 containerd[1976]: 2025-05-17 00:25:07.149 [INFO][6022] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:25:07.163606 containerd[1976]: 2025-05-17 00:25:07.149 [INFO][6022] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:25:07.163606 containerd[1976]: 2025-05-17 00:25:07.155 [WARNING][6022] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84" HandleID="k8s-pod-network.828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84" Workload="ip--172--31--18--208-k8s-whisker--56d5b74c78--d7rc9-eth0" May 17 00:25:07.163606 containerd[1976]: 2025-05-17 00:25:07.155 [INFO][6022] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84" HandleID="k8s-pod-network.828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84" Workload="ip--172--31--18--208-k8s-whisker--56d5b74c78--d7rc9-eth0" May 17 00:25:07.163606 containerd[1976]: 2025-05-17 00:25:07.157 [INFO][6022] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:25:07.163606 containerd[1976]: 2025-05-17 00:25:07.160 [INFO][6015] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84" May 17 00:25:07.164683 containerd[1976]: time="2025-05-17T00:25:07.163653336Z" level=info msg="TearDown network for sandbox \"828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84\" successfully" May 17 00:25:07.182251 containerd[1976]: time="2025-05-17T00:25:07.182199259Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:25:07.192065 containerd[1976]: time="2025-05-17T00:25:07.192014786Z" level=info msg="RemovePodSandbox \"828740e88e22b220f0c9333b94ead3693b3f002da13e1dd9fdb5534b31a7bf84\" returns successfully" May 17 00:25:07.256584 containerd[1976]: time="2025-05-17T00:25:07.255824922Z" level=info msg="StopPodSandbox for \"49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000\"" May 17 00:25:07.351650 containerd[1976]: 2025-05-17 00:25:07.311 [WARNING][6036] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--zpwmv-eth0", GenerateName:"calico-apiserver-8649d85dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"891b95b4-9f23-4ca3-aa2b-1578acf454d2", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 24, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8649d85dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-208", ContainerID:"6543616a7f3e892fca19c394d87921519cf7cbb82c375b6c92744ef3d73c076a", Pod:"calico-apiserver-8649d85dd-zpwmv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5ffcb7f1eb8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:25:07.351650 containerd[1976]: 2025-05-17 00:25:07.312 [INFO][6036] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000" May 17 00:25:07.351650 containerd[1976]: 2025-05-17 00:25:07.312 [INFO][6036] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000" iface="eth0" netns="" May 17 00:25:07.351650 containerd[1976]: 2025-05-17 00:25:07.312 [INFO][6036] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000" May 17 00:25:07.351650 containerd[1976]: 2025-05-17 00:25:07.312 [INFO][6036] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000" May 17 00:25:07.351650 containerd[1976]: 2025-05-17 00:25:07.340 [INFO][6043] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000" HandleID="k8s-pod-network.49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000" Workload="ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--zpwmv-eth0" May 17 00:25:07.351650 containerd[1976]: 2025-05-17 00:25:07.341 [INFO][6043] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:25:07.351650 containerd[1976]: 2025-05-17 00:25:07.341 [INFO][6043] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:25:07.351650 containerd[1976]: 2025-05-17 00:25:07.346 [WARNING][6043] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000" HandleID="k8s-pod-network.49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000" Workload="ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--zpwmv-eth0" May 17 00:25:07.351650 containerd[1976]: 2025-05-17 00:25:07.346 [INFO][6043] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000" HandleID="k8s-pod-network.49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000" Workload="ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--zpwmv-eth0" May 17 00:25:07.351650 containerd[1976]: 2025-05-17 00:25:07.347 [INFO][6043] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:25:07.351650 containerd[1976]: 2025-05-17 00:25:07.349 [INFO][6036] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000" May 17 00:25:07.352917 containerd[1976]: time="2025-05-17T00:25:07.351953591Z" level=info msg="TearDown network for sandbox \"49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000\" successfully" May 17 00:25:07.352917 containerd[1976]: time="2025-05-17T00:25:07.351979998Z" level=info msg="StopPodSandbox for \"49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000\" returns successfully" May 17 00:25:07.352917 containerd[1976]: time="2025-05-17T00:25:07.352595410Z" level=info msg="RemovePodSandbox for \"49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000\"" May 17 00:25:07.353278 containerd[1976]: time="2025-05-17T00:25:07.353017130Z" level=info msg="Forcibly stopping sandbox \"49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000\"" May 17 00:25:07.425738 containerd[1976]: 2025-05-17 00:25:07.391 [WARNING][6057] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--zpwmv-eth0", GenerateName:"calico-apiserver-8649d85dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"891b95b4-9f23-4ca3-aa2b-1578acf454d2", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 24, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8649d85dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-208", ContainerID:"6543616a7f3e892fca19c394d87921519cf7cbb82c375b6c92744ef3d73c076a", Pod:"calico-apiserver-8649d85dd-zpwmv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5ffcb7f1eb8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:25:07.425738 containerd[1976]: 2025-05-17 00:25:07.392 [INFO][6057] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000" May 17 00:25:07.425738 containerd[1976]: 2025-05-17 00:25:07.392 [INFO][6057] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000" iface="eth0" netns="" May 17 00:25:07.425738 containerd[1976]: 2025-05-17 00:25:07.392 [INFO][6057] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000" May 17 00:25:07.425738 containerd[1976]: 2025-05-17 00:25:07.392 [INFO][6057] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000" May 17 00:25:07.425738 containerd[1976]: 2025-05-17 00:25:07.414 [INFO][6064] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000" HandleID="k8s-pod-network.49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000" Workload="ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--zpwmv-eth0" May 17 00:25:07.425738 containerd[1976]: 2025-05-17 00:25:07.414 [INFO][6064] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:25:07.425738 containerd[1976]: 2025-05-17 00:25:07.414 [INFO][6064] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:25:07.425738 containerd[1976]: 2025-05-17 00:25:07.420 [WARNING][6064] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000" HandleID="k8s-pod-network.49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000" Workload="ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--zpwmv-eth0" May 17 00:25:07.425738 containerd[1976]: 2025-05-17 00:25:07.420 [INFO][6064] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000" HandleID="k8s-pod-network.49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000" Workload="ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--zpwmv-eth0" May 17 00:25:07.425738 containerd[1976]: 2025-05-17 00:25:07.422 [INFO][6064] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:25:07.425738 containerd[1976]: 2025-05-17 00:25:07.424 [INFO][6057] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000" May 17 00:25:07.427514 containerd[1976]: time="2025-05-17T00:25:07.425787558Z" level=info msg="TearDown network for sandbox \"49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000\" successfully" May 17 00:25:07.432727 containerd[1976]: time="2025-05-17T00:25:07.432691752Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:25:07.432805 containerd[1976]: time="2025-05-17T00:25:07.432753775Z" level=info msg="RemovePodSandbox \"49d45db6f5c960ebc985ffd915746e0a468c9c9460a804abc0971b2fdab7f000\" returns successfully" May 17 00:25:07.437283 containerd[1976]: time="2025-05-17T00:25:07.437247168Z" level=info msg="StopPodSandbox for \"e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b\"" May 17 00:25:07.509710 containerd[1976]: 2025-05-17 00:25:07.476 [WARNING][6079] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--rkbxs-eth0", GenerateName:"calico-apiserver-8649d85dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"ddf692d9-2f7b-48c5-85a5-b8c1de84fd75", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 24, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8649d85dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-208", ContainerID:"63f9b07c45dbd22617a94994f093e3caaf62f21ee1be08d1c27c4ffe7c451767", Pod:"calico-apiserver-8649d85dd-rkbxs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7f3eae56384", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:25:07.509710 containerd[1976]: 2025-05-17 00:25:07.476 [INFO][6079] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b" May 17 00:25:07.509710 containerd[1976]: 2025-05-17 00:25:07.476 [INFO][6079] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b" iface="eth0" netns="" May 17 00:25:07.509710 containerd[1976]: 2025-05-17 00:25:07.476 [INFO][6079] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b" May 17 00:25:07.509710 containerd[1976]: 2025-05-17 00:25:07.476 [INFO][6079] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b" May 17 00:25:07.509710 containerd[1976]: 2025-05-17 00:25:07.497 [INFO][6087] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b" HandleID="k8s-pod-network.e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b" Workload="ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--rkbxs-eth0" May 17 00:25:07.509710 containerd[1976]: 2025-05-17 00:25:07.498 [INFO][6087] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:25:07.509710 containerd[1976]: 2025-05-17 00:25:07.498 [INFO][6087] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:25:07.509710 containerd[1976]: 2025-05-17 00:25:07.504 [WARNING][6087] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b" HandleID="k8s-pod-network.e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b" Workload="ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--rkbxs-eth0" May 17 00:25:07.509710 containerd[1976]: 2025-05-17 00:25:07.504 [INFO][6087] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b" HandleID="k8s-pod-network.e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b" Workload="ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--rkbxs-eth0" May 17 00:25:07.509710 containerd[1976]: 2025-05-17 00:25:07.506 [INFO][6087] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:25:07.509710 containerd[1976]: 2025-05-17 00:25:07.507 [INFO][6079] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b" May 17 00:25:07.509710 containerd[1976]: time="2025-05-17T00:25:07.509665660Z" level=info msg="TearDown network for sandbox \"e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b\" successfully" May 17 00:25:07.509710 containerd[1976]: time="2025-05-17T00:25:07.509697439Z" level=info msg="StopPodSandbox for \"e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b\" returns successfully" May 17 00:25:07.511795 containerd[1976]: time="2025-05-17T00:25:07.510744417Z" level=info msg="RemovePodSandbox for \"e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b\"" May 17 00:25:07.511795 containerd[1976]: time="2025-05-17T00:25:07.510782360Z" level=info msg="Forcibly stopping sandbox \"e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b\"" May 17 00:25:07.581699 containerd[1976]: 2025-05-17 00:25:07.546 [WARNING][6101] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--rkbxs-eth0", GenerateName:"calico-apiserver-8649d85dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"ddf692d9-2f7b-48c5-85a5-b8c1de84fd75", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 24, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8649d85dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-208", ContainerID:"63f9b07c45dbd22617a94994f093e3caaf62f21ee1be08d1c27c4ffe7c451767", Pod:"calico-apiserver-8649d85dd-rkbxs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7f3eae56384", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:25:07.581699 containerd[1976]: 2025-05-17 00:25:07.546 [INFO][6101] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b" May 17 00:25:07.581699 containerd[1976]: 2025-05-17 00:25:07.546 [INFO][6101] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b" iface="eth0" netns="" May 17 00:25:07.581699 containerd[1976]: 2025-05-17 00:25:07.546 [INFO][6101] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b" May 17 00:25:07.581699 containerd[1976]: 2025-05-17 00:25:07.547 [INFO][6101] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b" May 17 00:25:07.581699 containerd[1976]: 2025-05-17 00:25:07.570 [INFO][6108] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b" HandleID="k8s-pod-network.e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b" Workload="ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--rkbxs-eth0" May 17 00:25:07.581699 containerd[1976]: 2025-05-17 00:25:07.570 [INFO][6108] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:25:07.581699 containerd[1976]: 2025-05-17 00:25:07.570 [INFO][6108] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:25:07.581699 containerd[1976]: 2025-05-17 00:25:07.576 [WARNING][6108] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b" HandleID="k8s-pod-network.e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b" Workload="ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--rkbxs-eth0" May 17 00:25:07.581699 containerd[1976]: 2025-05-17 00:25:07.576 [INFO][6108] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b" HandleID="k8s-pod-network.e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b" Workload="ip--172--31--18--208-k8s-calico--apiserver--8649d85dd--rkbxs-eth0" May 17 00:25:07.581699 containerd[1976]: 2025-05-17 00:25:07.578 [INFO][6108] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:25:07.581699 containerd[1976]: 2025-05-17 00:25:07.580 [INFO][6101] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b" May 17 00:25:07.582639 containerd[1976]: time="2025-05-17T00:25:07.581733809Z" level=info msg="TearDown network for sandbox \"e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b\" successfully" May 17 00:25:07.587622 containerd[1976]: time="2025-05-17T00:25:07.587563491Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:25:07.587848 containerd[1976]: time="2025-05-17T00:25:07.587637695Z" level=info msg="RemovePodSandbox \"e0a3cf9741e41e04d4f24e462fa991f7c7b6f273ac88facde723559b876d4f3b\" returns successfully" May 17 00:25:07.588198 containerd[1976]: time="2025-05-17T00:25:07.588115356Z" level=info msg="StopPodSandbox for \"9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56\"" May 17 00:25:07.657820 containerd[1976]: 2025-05-17 00:25:07.623 [WARNING][6122] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--208-k8s-calico--kube--controllers--58f54d8566--6bhlt-eth0", GenerateName:"calico-kube-controllers-58f54d8566-", Namespace:"calico-system", SelfLink:"", UID:"4c7db054-059f-46a4-9fc7-ca1358ceaf57", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 24, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58f54d8566", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-208", ContainerID:"fb324f8285e3c4975f9489bbc2366c497b19bde2d61915107c350c82101cfe32", Pod:"calico-kube-controllers-58f54d8566-6bhlt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid3525e8a027", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:25:07.657820 containerd[1976]: 2025-05-17 00:25:07.623 [INFO][6122] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56" May 17 00:25:07.657820 containerd[1976]: 2025-05-17 00:25:07.623 [INFO][6122] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56" iface="eth0" netns="" May 17 00:25:07.657820 containerd[1976]: 2025-05-17 00:25:07.623 [INFO][6122] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56" May 17 00:25:07.657820 containerd[1976]: 2025-05-17 00:25:07.623 [INFO][6122] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56" May 17 00:25:07.657820 containerd[1976]: 2025-05-17 00:25:07.645 [INFO][6129] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56" HandleID="k8s-pod-network.9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56" Workload="ip--172--31--18--208-k8s-calico--kube--controllers--58f54d8566--6bhlt-eth0" May 17 00:25:07.657820 containerd[1976]: 2025-05-17 00:25:07.645 [INFO][6129] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:25:07.657820 containerd[1976]: 2025-05-17 00:25:07.645 [INFO][6129] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:25:07.657820 containerd[1976]: 2025-05-17 00:25:07.651 [WARNING][6129] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56" HandleID="k8s-pod-network.9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56" Workload="ip--172--31--18--208-k8s-calico--kube--controllers--58f54d8566--6bhlt-eth0" May 17 00:25:07.657820 containerd[1976]: 2025-05-17 00:25:07.651 [INFO][6129] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56" HandleID="k8s-pod-network.9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56" Workload="ip--172--31--18--208-k8s-calico--kube--controllers--58f54d8566--6bhlt-eth0" May 17 00:25:07.657820 containerd[1976]: 2025-05-17 00:25:07.654 [INFO][6129] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:25:07.657820 containerd[1976]: 2025-05-17 00:25:07.656 [INFO][6122] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56" May 17 00:25:07.658479 containerd[1976]: time="2025-05-17T00:25:07.657857510Z" level=info msg="TearDown network for sandbox \"9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56\" successfully" May 17 00:25:07.658479 containerd[1976]: time="2025-05-17T00:25:07.657880936Z" level=info msg="StopPodSandbox for \"9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56\" returns successfully" May 17 00:25:07.658479 containerd[1976]: time="2025-05-17T00:25:07.658385756Z" level=info msg="RemovePodSandbox for \"9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56\"" May 17 00:25:07.658479 containerd[1976]: time="2025-05-17T00:25:07.658410684Z" level=info msg="Forcibly stopping sandbox \"9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56\"" May 17 00:25:07.734131 containerd[1976]: 2025-05-17 00:25:07.698 [WARNING][6143] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--208-k8s-calico--kube--controllers--58f54d8566--6bhlt-eth0", GenerateName:"calico-kube-controllers-58f54d8566-", Namespace:"calico-system", SelfLink:"", UID:"4c7db054-059f-46a4-9fc7-ca1358ceaf57", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 24, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58f54d8566", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-208", ContainerID:"fb324f8285e3c4975f9489bbc2366c497b19bde2d61915107c350c82101cfe32", Pod:"calico-kube-controllers-58f54d8566-6bhlt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid3525e8a027", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:25:07.734131 containerd[1976]: 2025-05-17 00:25:07.698 [INFO][6143] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56" May 17 00:25:07.734131 containerd[1976]: 2025-05-17 00:25:07.698 [INFO][6143] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56" iface="eth0" netns="" May 17 00:25:07.734131 containerd[1976]: 2025-05-17 00:25:07.698 [INFO][6143] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56" May 17 00:25:07.734131 containerd[1976]: 2025-05-17 00:25:07.698 [INFO][6143] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56" May 17 00:25:07.734131 containerd[1976]: 2025-05-17 00:25:07.722 [INFO][6150] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56" HandleID="k8s-pod-network.9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56" Workload="ip--172--31--18--208-k8s-calico--kube--controllers--58f54d8566--6bhlt-eth0" May 17 00:25:07.734131 containerd[1976]: 2025-05-17 00:25:07.722 [INFO][6150] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:25:07.734131 containerd[1976]: 2025-05-17 00:25:07.722 [INFO][6150] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:25:07.734131 containerd[1976]: 2025-05-17 00:25:07.729 [WARNING][6150] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56" HandleID="k8s-pod-network.9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56" Workload="ip--172--31--18--208-k8s-calico--kube--controllers--58f54d8566--6bhlt-eth0" May 17 00:25:07.734131 containerd[1976]: 2025-05-17 00:25:07.729 [INFO][6150] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56" HandleID="k8s-pod-network.9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56" Workload="ip--172--31--18--208-k8s-calico--kube--controllers--58f54d8566--6bhlt-eth0" May 17 00:25:07.734131 containerd[1976]: 2025-05-17 00:25:07.730 [INFO][6150] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:25:07.734131 containerd[1976]: 2025-05-17 00:25:07.732 [INFO][6143] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56" May 17 00:25:07.735846 containerd[1976]: time="2025-05-17T00:25:07.734172897Z" level=info msg="TearDown network for sandbox \"9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56\" successfully" May 17 00:25:07.739854 containerd[1976]: time="2025-05-17T00:25:07.739798635Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:25:07.740489 containerd[1976]: time="2025-05-17T00:25:07.739867003Z" level=info msg="RemovePodSandbox \"9aa7443a6757cc6e491cc068d74870888fae8fc1cfb20af3017df96d9c5c6a56\" returns successfully" May 17 00:25:07.759584 containerd[1976]: time="2025-05-17T00:25:07.759522628Z" level=info msg="StopPodSandbox for \"d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1\"" May 17 00:25:07.831988 containerd[1976]: 2025-05-17 00:25:07.793 [WARNING][6164] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--208-k8s-csi--node--driver--7knxl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"154c5300-472e-444e-8595-31315d3f4aee", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 24, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-208", ContainerID:"32921fc6e992cc47cbc19bee0a7688389dd394c1729071823e912f91d601d80c", Pod:"csi-node-driver-7knxl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib350a564f8b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:25:07.831988 containerd[1976]: 2025-05-17 00:25:07.794 [INFO][6164] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1" May 17 00:25:07.831988 containerd[1976]: 2025-05-17 00:25:07.794 [INFO][6164] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1" iface="eth0" netns="" May 17 00:25:07.831988 containerd[1976]: 2025-05-17 00:25:07.794 [INFO][6164] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1" May 17 00:25:07.831988 containerd[1976]: 2025-05-17 00:25:07.794 [INFO][6164] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1" May 17 00:25:07.831988 containerd[1976]: 2025-05-17 00:25:07.820 [INFO][6171] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1" HandleID="k8s-pod-network.d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1" Workload="ip--172--31--18--208-k8s-csi--node--driver--7knxl-eth0" May 17 00:25:07.831988 containerd[1976]: 2025-05-17 00:25:07.820 [INFO][6171] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:25:07.831988 containerd[1976]: 2025-05-17 00:25:07.820 [INFO][6171] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:25:07.831988 containerd[1976]: 2025-05-17 00:25:07.826 [WARNING][6171] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1" HandleID="k8s-pod-network.d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1" Workload="ip--172--31--18--208-k8s-csi--node--driver--7knxl-eth0" May 17 00:25:07.831988 containerd[1976]: 2025-05-17 00:25:07.826 [INFO][6171] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1" HandleID="k8s-pod-network.d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1" Workload="ip--172--31--18--208-k8s-csi--node--driver--7knxl-eth0" May 17 00:25:07.831988 containerd[1976]: 2025-05-17 00:25:07.828 [INFO][6171] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:25:07.831988 containerd[1976]: 2025-05-17 00:25:07.830 [INFO][6164] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1" May 17 00:25:07.831988 containerd[1976]: time="2025-05-17T00:25:07.831961842Z" level=info msg="TearDown network for sandbox \"d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1\" successfully" May 17 00:25:07.831988 containerd[1976]: time="2025-05-17T00:25:07.831983993Z" level=info msg="StopPodSandbox for \"d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1\" returns successfully" May 17 00:25:07.834403 containerd[1976]: time="2025-05-17T00:25:07.832472893Z" level=info msg="RemovePodSandbox for \"d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1\"" May 17 00:25:07.834403 containerd[1976]: time="2025-05-17T00:25:07.832498341Z" level=info msg="Forcibly stopping sandbox \"d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1\"" May 17 00:25:07.908602 containerd[1976]: 2025-05-17 00:25:07.872 [WARNING][6185] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--208-k8s-csi--node--driver--7knxl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"154c5300-472e-444e-8595-31315d3f4aee", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 24, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-208", ContainerID:"32921fc6e992cc47cbc19bee0a7688389dd394c1729071823e912f91d601d80c", Pod:"csi-node-driver-7knxl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib350a564f8b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:25:07.908602 containerd[1976]: 2025-05-17 00:25:07.872 [INFO][6185] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1" May 17 00:25:07.908602 containerd[1976]: 2025-05-17 00:25:07.872 [INFO][6185] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1" iface="eth0" netns="" May 17 00:25:07.908602 containerd[1976]: 2025-05-17 00:25:07.872 [INFO][6185] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1" May 17 00:25:07.908602 containerd[1976]: 2025-05-17 00:25:07.872 [INFO][6185] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1" May 17 00:25:07.908602 containerd[1976]: 2025-05-17 00:25:07.897 [INFO][6192] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1" HandleID="k8s-pod-network.d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1" Workload="ip--172--31--18--208-k8s-csi--node--driver--7knxl-eth0" May 17 00:25:07.908602 containerd[1976]: 2025-05-17 00:25:07.897 [INFO][6192] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:25:07.908602 containerd[1976]: 2025-05-17 00:25:07.897 [INFO][6192] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:25:07.908602 containerd[1976]: 2025-05-17 00:25:07.903 [WARNING][6192] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1" HandleID="k8s-pod-network.d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1" Workload="ip--172--31--18--208-k8s-csi--node--driver--7knxl-eth0" May 17 00:25:07.908602 containerd[1976]: 2025-05-17 00:25:07.903 [INFO][6192] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1" HandleID="k8s-pod-network.d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1" Workload="ip--172--31--18--208-k8s-csi--node--driver--7knxl-eth0" May 17 00:25:07.908602 containerd[1976]: 2025-05-17 00:25:07.904 [INFO][6192] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:25:07.908602 containerd[1976]: 2025-05-17 00:25:07.906 [INFO][6185] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1" May 17 00:25:07.910061 containerd[1976]: time="2025-05-17T00:25:07.908652980Z" level=info msg="TearDown network for sandbox \"d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1\" successfully" May 17 00:25:07.914866 containerd[1976]: time="2025-05-17T00:25:07.914821060Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:25:07.914972 containerd[1976]: time="2025-05-17T00:25:07.914895976Z" level=info msg="RemovePodSandbox \"d5857f7486c2aef4006df0775fca7977f04103586d387fc0e3d676b6c725dea1\" returns successfully" May 17 00:25:07.915466 containerd[1976]: time="2025-05-17T00:25:07.915439532Z" level=info msg="StopPodSandbox for \"1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1\"" May 17 00:25:07.942681 systemd[1]: Started sshd@12-172.31.18.208:22-147.75.109.163:46294.service - OpenSSH per-connection server daemon (147.75.109.163:46294). May 17 00:25:08.041391 containerd[1976]: 2025-05-17 00:25:07.985 [WARNING][6206] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--208-k8s-coredns--668d6bf9bc--66bvn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9cde766c-cf7a-4494-a1ab-ccbb03aa389f", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 24, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-208", ContainerID:"fa775ff22f05035f753b7cd3f2e04137d42253af7125da1c617ef01276772f52", Pod:"coredns-668d6bf9bc-66bvn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0950082a7f3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:25:08.041391 containerd[1976]: 2025-05-17 00:25:07.985 [INFO][6206] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1" May 17 00:25:08.041391 containerd[1976]: 2025-05-17 00:25:07.985 [INFO][6206] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1" iface="eth0" netns="" May 17 00:25:08.041391 containerd[1976]: 2025-05-17 00:25:07.985 [INFO][6206] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1" May 17 00:25:08.041391 containerd[1976]: 2025-05-17 00:25:07.985 [INFO][6206] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1" May 17 00:25:08.041391 containerd[1976]: 2025-05-17 00:25:08.018 [INFO][6215] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1" HandleID="k8s-pod-network.1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1" Workload="ip--172--31--18--208-k8s-coredns--668d6bf9bc--66bvn-eth0" May 17 00:25:08.041391 containerd[1976]: 2025-05-17 00:25:08.018 [INFO][6215] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:25:08.041391 containerd[1976]: 2025-05-17 00:25:08.018 [INFO][6215] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:25:08.041391 containerd[1976]: 2025-05-17 00:25:08.031 [WARNING][6215] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1" HandleID="k8s-pod-network.1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1" Workload="ip--172--31--18--208-k8s-coredns--668d6bf9bc--66bvn-eth0" May 17 00:25:08.041391 containerd[1976]: 2025-05-17 00:25:08.031 [INFO][6215] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1" HandleID="k8s-pod-network.1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1" Workload="ip--172--31--18--208-k8s-coredns--668d6bf9bc--66bvn-eth0" May 17 00:25:08.041391 containerd[1976]: 2025-05-17 00:25:08.034 [INFO][6215] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:25:08.041391 containerd[1976]: 2025-05-17 00:25:08.038 [INFO][6206] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1" May 17 00:25:08.043488 containerd[1976]: time="2025-05-17T00:25:08.042508069Z" level=info msg="TearDown network for sandbox \"1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1\" successfully" May 17 00:25:08.043488 containerd[1976]: time="2025-05-17T00:25:08.042563494Z" level=info msg="StopPodSandbox for \"1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1\" returns successfully" May 17 00:25:08.045150 containerd[1976]: time="2025-05-17T00:25:08.044186466Z" level=info msg="RemovePodSandbox for \"1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1\"" May 17 00:25:08.045150 containerd[1976]: time="2025-05-17T00:25:08.044232153Z" level=info msg="Forcibly stopping sandbox \"1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1\"" May 17 00:25:08.203625 containerd[1976]: 2025-05-17 00:25:08.121 [WARNING][6231] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--208-k8s-coredns--668d6bf9bc--66bvn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9cde766c-cf7a-4494-a1ab-ccbb03aa389f", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 24, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-208", ContainerID:"fa775ff22f05035f753b7cd3f2e04137d42253af7125da1c617ef01276772f52", Pod:"coredns-668d6bf9bc-66bvn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0950082a7f3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:25:08.203625 containerd[1976]: 2025-05-17 00:25:08.122 [INFO][6231] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1" May 17 00:25:08.203625 containerd[1976]: 2025-05-17 00:25:08.122 [INFO][6231] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1" iface="eth0" netns="" May 17 00:25:08.203625 containerd[1976]: 2025-05-17 00:25:08.122 [INFO][6231] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1" May 17 00:25:08.203625 containerd[1976]: 2025-05-17 00:25:08.122 [INFO][6231] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1" May 17 00:25:08.203625 containerd[1976]: 2025-05-17 00:25:08.175 [INFO][6238] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1" HandleID="k8s-pod-network.1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1" Workload="ip--172--31--18--208-k8s-coredns--668d6bf9bc--66bvn-eth0" May 17 00:25:08.203625 containerd[1976]: 2025-05-17 00:25:08.176 [INFO][6238] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:25:08.203625 containerd[1976]: 2025-05-17 00:25:08.176 [INFO][6238] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:25:08.203625 containerd[1976]: 2025-05-17 00:25:08.191 [WARNING][6238] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1" HandleID="k8s-pod-network.1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1" Workload="ip--172--31--18--208-k8s-coredns--668d6bf9bc--66bvn-eth0" May 17 00:25:08.203625 containerd[1976]: 2025-05-17 00:25:08.191 [INFO][6238] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1" HandleID="k8s-pod-network.1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1" Workload="ip--172--31--18--208-k8s-coredns--668d6bf9bc--66bvn-eth0" May 17 00:25:08.203625 containerd[1976]: 2025-05-17 00:25:08.195 [INFO][6238] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:25:08.203625 containerd[1976]: 2025-05-17 00:25:08.199 [INFO][6231] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1" May 17 00:25:08.208773 containerd[1976]: time="2025-05-17T00:25:08.203612822Z" level=info msg="TearDown network for sandbox \"1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1\" successfully" May 17 00:25:08.210802 containerd[1976]: time="2025-05-17T00:25:08.210733185Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:25:08.210893 containerd[1976]: time="2025-05-17T00:25:08.210809921Z" level=info msg="RemovePodSandbox \"1f9c668a84fb512f058c0fde06c3f87d15ebb98dcae7ee6e0449ae0adeef2af1\" returns successfully" May 17 00:25:08.233475 sshd[6211]: Accepted publickey for core from 147.75.109.163 port 46294 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:25:08.239701 sshd[6211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:25:08.249642 systemd-logind[1960]: New session 13 of user core. May 17 00:25:08.255805 systemd[1]: Started session-13.scope - Session 13 of User core. May 17 00:25:08.261946 containerd[1976]: time="2025-05-17T00:25:08.261852153Z" level=info msg="StopPodSandbox for \"5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8\"" May 17 00:25:08.400814 containerd[1976]: 2025-05-17 00:25:08.335 [WARNING][6256] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--208-k8s-goldmane--78d55f7ddc--w4ggj-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"3edbec67-a280-4b9a-b567-9942c66f18d0", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 24, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-208", ContainerID:"7fc7dc3b4c45ffeacceb437ff92aaa78aa4df22468795436c140b601afb0db4b", Pod:"goldmane-78d55f7ddc-w4ggj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.106.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6ea7b627f8b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:25:08.400814 containerd[1976]: 2025-05-17 00:25:08.335 [INFO][6256] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8" May 17 00:25:08.400814 containerd[1976]: 2025-05-17 00:25:08.335 [INFO][6256] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8" iface="eth0" netns="" May 17 00:25:08.400814 containerd[1976]: 2025-05-17 00:25:08.335 [INFO][6256] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8" May 17 00:25:08.400814 containerd[1976]: 2025-05-17 00:25:08.335 [INFO][6256] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8" May 17 00:25:08.400814 containerd[1976]: 2025-05-17 00:25:08.377 [INFO][6263] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8" HandleID="k8s-pod-network.5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8" Workload="ip--172--31--18--208-k8s-goldmane--78d55f7ddc--w4ggj-eth0" May 17 00:25:08.400814 containerd[1976]: 2025-05-17 00:25:08.377 [INFO][6263] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:25:08.400814 containerd[1976]: 2025-05-17 00:25:08.378 [INFO][6263] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:25:08.400814 containerd[1976]: 2025-05-17 00:25:08.386 [WARNING][6263] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8" HandleID="k8s-pod-network.5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8" Workload="ip--172--31--18--208-k8s-goldmane--78d55f7ddc--w4ggj-eth0" May 17 00:25:08.400814 containerd[1976]: 2025-05-17 00:25:08.387 [INFO][6263] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8" HandleID="k8s-pod-network.5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8" Workload="ip--172--31--18--208-k8s-goldmane--78d55f7ddc--w4ggj-eth0" May 17 00:25:08.400814 containerd[1976]: 2025-05-17 00:25:08.392 [INFO][6263] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:25:08.400814 containerd[1976]: 2025-05-17 00:25:08.397 [INFO][6256] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8" May 17 00:25:08.401824 containerd[1976]: time="2025-05-17T00:25:08.400833720Z" level=info msg="TearDown network for sandbox \"5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8\" successfully" May 17 00:25:08.401824 containerd[1976]: time="2025-05-17T00:25:08.400864274Z" level=info msg="StopPodSandbox for \"5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8\" returns successfully" May 17 00:25:08.427542 kubelet[3161]: I0517 00:25:08.427416 3161 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:25:08.457072 containerd[1976]: time="2025-05-17T00:25:08.454739378Z" level=info msg="RemovePodSandbox for \"5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8\"" May 17 00:25:08.457072 containerd[1976]: time="2025-05-17T00:25:08.454777341Z" level=info msg="Forcibly stopping sandbox \"5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8\"" May 17 00:25:08.553481 containerd[1976]: 2025-05-17 00:25:08.503 [WARNING][6281] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--208-k8s-goldmane--78d55f7ddc--w4ggj-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"3edbec67-a280-4b9a-b567-9942c66f18d0", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 24, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-208", ContainerID:"7fc7dc3b4c45ffeacceb437ff92aaa78aa4df22468795436c140b601afb0db4b", Pod:"goldmane-78d55f7ddc-w4ggj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.106.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6ea7b627f8b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:25:08.553481 containerd[1976]: 2025-05-17 00:25:08.504 [INFO][6281] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8" May 17 00:25:08.553481 containerd[1976]: 2025-05-17 00:25:08.504 [INFO][6281] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8" iface="eth0" netns="" May 17 00:25:08.553481 containerd[1976]: 2025-05-17 00:25:08.504 [INFO][6281] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8" May 17 00:25:08.553481 containerd[1976]: 2025-05-17 00:25:08.504 [INFO][6281] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8" May 17 00:25:08.553481 containerd[1976]: 2025-05-17 00:25:08.532 [INFO][6291] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8" HandleID="k8s-pod-network.5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8" Workload="ip--172--31--18--208-k8s-goldmane--78d55f7ddc--w4ggj-eth0" May 17 00:25:08.553481 containerd[1976]: 2025-05-17 00:25:08.532 [INFO][6291] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:25:08.553481 containerd[1976]: 2025-05-17 00:25:08.532 [INFO][6291] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:25:08.553481 containerd[1976]: 2025-05-17 00:25:08.542 [WARNING][6291] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8" HandleID="k8s-pod-network.5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8" Workload="ip--172--31--18--208-k8s-goldmane--78d55f7ddc--w4ggj-eth0" May 17 00:25:08.553481 containerd[1976]: 2025-05-17 00:25:08.542 [INFO][6291] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8" HandleID="k8s-pod-network.5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8" Workload="ip--172--31--18--208-k8s-goldmane--78d55f7ddc--w4ggj-eth0" May 17 00:25:08.553481 containerd[1976]: 2025-05-17 00:25:08.545 [INFO][6291] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:25:08.553481 containerd[1976]: 2025-05-17 00:25:08.548 [INFO][6281] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8" May 17 00:25:08.554026 containerd[1976]: time="2025-05-17T00:25:08.553515864Z" level=info msg="TearDown network for sandbox \"5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8\" successfully" May 17 00:25:08.568146 containerd[1976]: time="2025-05-17T00:25:08.568072370Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:25:08.571123 containerd[1976]: time="2025-05-17T00:25:08.568164979Z" level=info msg="RemovePodSandbox \"5a4dc09e1105606b638b92a9266772cf7f2c765a65cf3b6c1aa6a7a95e483fb8\" returns successfully" May 17 00:25:08.678009 containerd[1976]: time="2025-05-17T00:25:08.676710449Z" level=info msg="StopPodSandbox for \"624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73\"" May 17 00:25:08.856237 containerd[1976]: 2025-05-17 00:25:08.786 [WARNING][6306] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--208-k8s-coredns--668d6bf9bc--klpmb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e41e279e-d875-4866-b909-66b33f148bb6", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 24, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-208", ContainerID:"66668f70de1d80034dba83cf58f62865d6a3f0c99ee4dd819fd1671eadf7932a", Pod:"coredns-668d6bf9bc-klpmb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali07082215a3c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:25:08.856237 containerd[1976]: 2025-05-17 00:25:08.786 [INFO][6306] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73" May 17 00:25:08.856237 containerd[1976]: 2025-05-17 00:25:08.786 [INFO][6306] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73" iface="eth0" netns="" May 17 00:25:08.856237 containerd[1976]: 2025-05-17 00:25:08.786 [INFO][6306] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73" May 17 00:25:08.856237 containerd[1976]: 2025-05-17 00:25:08.786 [INFO][6306] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73" May 17 00:25:08.856237 containerd[1976]: 2025-05-17 00:25:08.833 [INFO][6314] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73" HandleID="k8s-pod-network.624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73" Workload="ip--172--31--18--208-k8s-coredns--668d6bf9bc--klpmb-eth0" May 17 00:25:08.856237 containerd[1976]: 2025-05-17 00:25:08.833 [INFO][6314] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:25:08.856237 containerd[1976]: 2025-05-17 00:25:08.833 [INFO][6314] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:25:08.856237 containerd[1976]: 2025-05-17 00:25:08.848 [WARNING][6314] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73" HandleID="k8s-pod-network.624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73" Workload="ip--172--31--18--208-k8s-coredns--668d6bf9bc--klpmb-eth0" May 17 00:25:08.856237 containerd[1976]: 2025-05-17 00:25:08.848 [INFO][6314] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73" HandleID="k8s-pod-network.624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73" Workload="ip--172--31--18--208-k8s-coredns--668d6bf9bc--klpmb-eth0" May 17 00:25:08.856237 containerd[1976]: 2025-05-17 00:25:08.851 [INFO][6314] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:25:08.856237 containerd[1976]: 2025-05-17 00:25:08.853 [INFO][6306] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73" May 17 00:25:08.856954 containerd[1976]: time="2025-05-17T00:25:08.856245838Z" level=info msg="TearDown network for sandbox \"624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73\" successfully" May 17 00:25:08.856954 containerd[1976]: time="2025-05-17T00:25:08.856309419Z" level=info msg="StopPodSandbox for \"624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73\" returns successfully" May 17 00:25:08.871005 containerd[1976]: time="2025-05-17T00:25:08.870693325Z" level=info msg="RemovePodSandbox for \"624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73\"" May 17 00:25:08.871005 containerd[1976]: time="2025-05-17T00:25:08.870751993Z" level=info msg="Forcibly stopping sandbox \"624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73\"" May 17 00:25:09.004974 containerd[1976]: 2025-05-17 00:25:08.956 [WARNING][6330] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--208-k8s-coredns--668d6bf9bc--klpmb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e41e279e-d875-4866-b909-66b33f148bb6", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 24, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-208", ContainerID:"66668f70de1d80034dba83cf58f62865d6a3f0c99ee4dd819fd1671eadf7932a", Pod:"coredns-668d6bf9bc-klpmb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali07082215a3c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:25:09.004974 containerd[1976]: 2025-05-17 00:25:08.957 [INFO][6330] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73" May 17 00:25:09.004974 containerd[1976]: 2025-05-17 00:25:08.957 [INFO][6330] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73" iface="eth0" netns="" May 17 00:25:09.004974 containerd[1976]: 2025-05-17 00:25:08.957 [INFO][6330] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73" May 17 00:25:09.004974 containerd[1976]: 2025-05-17 00:25:08.957 [INFO][6330] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73" May 17 00:25:09.004974 containerd[1976]: 2025-05-17 00:25:08.990 [INFO][6337] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73" HandleID="k8s-pod-network.624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73" Workload="ip--172--31--18--208-k8s-coredns--668d6bf9bc--klpmb-eth0" May 17 00:25:09.004974 containerd[1976]: 2025-05-17 00:25:08.991 [INFO][6337] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:25:09.004974 containerd[1976]: 2025-05-17 00:25:08.991 [INFO][6337] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:25:09.004974 containerd[1976]: 2025-05-17 00:25:08.999 [WARNING][6337] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73" HandleID="k8s-pod-network.624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73" Workload="ip--172--31--18--208-k8s-coredns--668d6bf9bc--klpmb-eth0" May 17 00:25:09.004974 containerd[1976]: 2025-05-17 00:25:08.999 [INFO][6337] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73" HandleID="k8s-pod-network.624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73" Workload="ip--172--31--18--208-k8s-coredns--668d6bf9bc--klpmb-eth0" May 17 00:25:09.004974 containerd[1976]: 2025-05-17 00:25:09.000 [INFO][6337] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:25:09.004974 containerd[1976]: 2025-05-17 00:25:09.002 [INFO][6330] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73" May 17 00:25:09.006203 containerd[1976]: time="2025-05-17T00:25:09.005020786Z" level=info msg="TearDown network for sandbox \"624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73\" successfully" May 17 00:25:09.011231 containerd[1976]: time="2025-05-17T00:25:09.011031983Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:25:09.011231 containerd[1976]: time="2025-05-17T00:25:09.011145886Z" level=info msg="RemovePodSandbox \"624a38bc29c7fd8e9d674ed6b38a8c3ab261a94dfbad7d340d8aa9eff6802f73\" returns successfully" May 17 00:25:09.115554 sshd[6211]: pam_unix(sshd:session): session closed for user core May 17 00:25:09.120863 systemd[1]: sshd@12-172.31.18.208:22-147.75.109.163:46294.service: Deactivated successfully. May 17 00:25:09.122962 systemd[1]: session-13.scope: Deactivated successfully. May 17 00:25:09.124485 systemd-logind[1960]: Session 13 logged out. Waiting for processes to exit. May 17 00:25:09.125555 systemd-logind[1960]: Removed session 13. May 17 00:25:14.146588 systemd[1]: Started sshd@13-172.31.18.208:22-147.75.109.163:42764.service - OpenSSH per-connection server daemon (147.75.109.163:42764). May 17 00:25:14.346380 sshd[6360]: Accepted publickey for core from 147.75.109.163 port 42764 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:25:14.347995 sshd[6360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:25:14.352572 systemd-logind[1960]: New session 14 of user core. May 17 00:25:14.361755 systemd[1]: Started session-14.scope - Session 14 of User core. May 17 00:25:14.805101 sshd[6360]: pam_unix(sshd:session): session closed for user core May 17 00:25:14.808321 systemd[1]: sshd@13-172.31.18.208:22-147.75.109.163:42764.service: Deactivated successfully. May 17 00:25:14.810082 systemd[1]: session-14.scope: Deactivated successfully. May 17 00:25:14.811457 systemd-logind[1960]: Session 14 logged out. Waiting for processes to exit. May 17 00:25:14.812728 systemd-logind[1960]: Removed session 14. 
May 17 00:25:17.841193 kubelet[3161]: E0517 00:25:17.835007 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-w4ggj" podUID="3edbec67-a280-4b9a-b567-9942c66f18d0" May 17 00:25:17.851922 kubelet[3161]: E0517 00:25:17.836552 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-d96dfd79b-fl892" podUID="9e29c649-bade-4daa-bb31-67432210eca8" May 17 00:25:18.052130 kubelet[3161]: I0517 00:25:18.050069 3161 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:25:19.839883 systemd[1]: Started sshd@14-172.31.18.208:22-147.75.109.163:47286.service - OpenSSH per-connection server daemon (147.75.109.163:47286). May 17 00:25:20.122405 sshd[6398]: Accepted publickey for core from 147.75.109.163 port 47286 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:25:20.125147 sshd[6398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:25:20.132082 systemd-logind[1960]: New session 15 of user core. May 17 00:25:20.136814 systemd[1]: Started session-15.scope - Session 15 of User core. May 17 00:25:21.477773 sshd[6398]: pam_unix(sshd:session): session closed for user core May 17 00:25:21.482647 systemd-logind[1960]: Session 15 logged out. Waiting for processes to exit. May 17 00:25:21.484933 systemd[1]: sshd@14-172.31.18.208:22-147.75.109.163:47286.service: Deactivated successfully. May 17 00:25:21.489557 systemd[1]: session-15.scope: Deactivated successfully. May 17 00:25:21.495134 systemd-logind[1960]: Removed session 15. May 17 00:25:26.531951 systemd[1]: Started sshd@15-172.31.18.208:22-147.75.109.163:47298.service - OpenSSH per-connection server daemon (147.75.109.163:47298). 
May 17 00:25:26.760618 sshd[6420]: Accepted publickey for core from 147.75.109.163 port 47298 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:25:26.763860 sshd[6420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:25:26.776758 systemd-logind[1960]: New session 16 of user core. May 17 00:25:26.780727 systemd[1]: Started session-16.scope - Session 16 of User core. May 17 00:25:27.629781 sshd[6420]: pam_unix(sshd:session): session closed for user core May 17 00:25:27.638516 systemd[1]: sshd@15-172.31.18.208:22-147.75.109.163:47298.service: Deactivated successfully. May 17 00:25:27.641068 systemd-logind[1960]: Session 16 logged out. Waiting for processes to exit. May 17 00:25:27.643728 systemd[1]: session-16.scope: Deactivated successfully. May 17 00:25:27.646622 systemd-logind[1960]: Removed session 16. May 17 00:25:27.669959 systemd[1]: Started sshd@16-172.31.18.208:22-147.75.109.163:47302.service - OpenSSH per-connection server daemon (147.75.109.163:47302). May 17 00:25:27.734884 systemd[1]: run-containerd-runc-k8s.io-2c397b5d94fafd034e66a3725c04852e055aceb54abcab31880798f7820aa610-runc.D488Rr.mount: Deactivated successfully. May 17 00:25:27.934424 sshd[6438]: Accepted publickey for core from 147.75.109.163 port 47302 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:25:27.939219 sshd[6438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:25:27.946432 systemd-logind[1960]: New session 17 of user core. May 17 00:25:27.952749 systemd[1]: Started session-17.scope - Session 17 of User core. May 17 00:25:28.759593 sshd[6438]: pam_unix(sshd:session): session closed for user core May 17 00:25:28.771231 systemd[1]: sshd@16-172.31.18.208:22-147.75.109.163:47302.service: Deactivated successfully. May 17 00:25:28.772940 systemd-logind[1960]: Session 17 logged out. Waiting for processes to exit. May 17 00:25:28.775121 systemd[1]: session-17.scope: Deactivated successfully. May 17 00:25:28.792373 systemd-logind[1960]: Removed session 17. May 17 00:25:28.801913 systemd[1]: Started sshd@17-172.31.18.208:22-147.75.109.163:37766.service - OpenSSH per-connection server daemon (147.75.109.163:37766). May 17 00:25:29.041726 containerd[1976]: time="2025-05-17T00:25:29.041069005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:25:29.079358 sshd[6462]: Accepted publickey for core from 147.75.109.163 port 37766 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:25:29.084395 sshd[6462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:25:29.097703 systemd-logind[1960]: New session 18 of user core. May 17 00:25:29.103739 systemd[1]: Started session-18.scope - Session 18 of User core. 
May 17 00:25:29.479714 containerd[1976]: time="2025-05-17T00:25:29.479575430Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:25:29.481857 containerd[1976]: time="2025-05-17T00:25:29.481706926Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:25:29.483334 containerd[1976]: time="2025-05-17T00:25:29.482033715Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:25:29.517093 kubelet[3161]: E0517 00:25:29.505789 3161 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:25:29.521014 kubelet[3161]: E0517 00:25:29.520932 3161 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:25:29.555132 kubelet[3161]: E0517 00:25:29.554161 3161 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b29552b59a2b4980bc180c562b9beff2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v4jmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-d96dfd79b-fl892_calico-system(9e29c649-bade-4daa-bb31-67432210eca8): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 17 00:25:29.556489 containerd[1976]: time="2025-05-17T00:25:29.556460854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\""
May 17 00:25:29.726556 containerd[1976]: time="2025-05-17T00:25:29.726497091Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 17 00:25:29.728809 containerd[1976]: time="2025-05-17T00:25:29.728746312Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden"
May 17 00:25:29.729030 containerd[1976]: time="2025-05-17T00:25:29.728928681Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86"
May 17 00:25:29.729325 kubelet[3161]: E0517 00:25:29.729292 3161 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 17 00:25:29.729493 kubelet[3161]: E0517 00:25:29.729478 3161 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 17 00:25:29.730014 kubelet[3161]: E0517 00:25:29.729706 3161 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v4jmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-d96dfd79b-fl892_calico-system(9e29c649-bade-4daa-bb31-67432210eca8): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 17 00:25:29.749072 kubelet[3161]: E0517 00:25:29.748749 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-d96dfd79b-fl892" podUID="9e29c649-bade-4daa-bb31-67432210eca8"
May 17 00:25:30.440801 sshd[6462]: pam_unix(sshd:session): session closed for user core
May 17 00:25:30.448657 systemd-logind[1960]: Session 18 logged out. Waiting for processes to exit.
May 17 00:25:30.452181 systemd[1]: sshd@17-172.31.18.208:22-147.75.109.163:37766.service: Deactivated successfully.
May 17 00:25:30.454556 systemd[1]: session-18.scope: Deactivated successfully.
May 17 00:25:30.457876 systemd-logind[1960]: Removed session 18.
May 17 00:25:30.478866 systemd[1]: Started sshd@18-172.31.18.208:22-147.75.109.163:37778.service - OpenSSH per-connection server daemon (147.75.109.163:37778).
May 17 00:25:30.706899 sshd[6483]: Accepted publickey for core from 147.75.109.163 port 37778 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4
May 17 00:25:30.710225 sshd[6483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:25:30.715358 systemd-logind[1960]: New session 19 of user core.
May 17 00:25:30.721112 systemd[1]: Started session-19.scope - Session 19 of User core.
May 17 00:25:32.205140 sshd[6483]: pam_unix(sshd:session): session closed for user core
May 17 00:25:32.212180 systemd[1]: sshd@18-172.31.18.208:22-147.75.109.163:37778.service: Deactivated successfully.
May 17 00:25:32.216492 systemd[1]: session-19.scope: Deactivated successfully.
May 17 00:25:32.220768 systemd-logind[1960]: Session 19 logged out. Waiting for processes to exit.
May 17 00:25:32.222983 systemd-logind[1960]: Removed session 19.
May 17 00:25:32.246033 systemd[1]: Started sshd@19-172.31.18.208:22-147.75.109.163:37792.service - OpenSSH per-connection server daemon (147.75.109.163:37792).
May 17 00:25:32.467401 sshd[6501]: Accepted publickey for core from 147.75.109.163 port 37792 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4
May 17 00:25:32.469301 sshd[6501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:25:32.475808 systemd-logind[1960]: New session 20 of user core.
May 17 00:25:32.481714 systemd[1]: Started session-20.scope - Session 20 of User core.
May 17 00:25:32.741886 sshd[6501]: pam_unix(sshd:session): session closed for user core
May 17 00:25:32.748858 systemd-logind[1960]: Session 20 logged out. Waiting for processes to exit.
May 17 00:25:32.750050 systemd[1]: sshd@19-172.31.18.208:22-147.75.109.163:37792.service: Deactivated successfully.
May 17 00:25:32.752294 systemd[1]: session-20.scope: Deactivated successfully.
May 17 00:25:32.754312 systemd-logind[1960]: Removed session 20.
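
The repeated "403 Forbidden" entries above all come from the same step: before pulling a layer, containerd performs an anonymous token handshake with ghcr.io, a GET against the /token endpoint named verbatim in each error. The following Go sketch (hypothetical, not part of this log) reproduces that exact request using the URL logged above, so the failure can be checked independently of the kubelet:

    // token_check.go - minimal sketch reproducing containerd's anonymous
    // token request for the whisker image. The URL is copied verbatim from
    // the log above; the program itself is an assumption for illustration.
    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        url := "https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io"
        resp, err := http.Get(url)
        if err != nil {
            fmt.Println("request error:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // A healthy registry answers "200 OK" with a JSON token; the node
        // in this log was getting "403 Forbidden" instead.
        fmt.Println("status:", resp.Status)
        fmt.Println("body:", string(body))
    }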
May 17 00:25:32.786013 containerd[1976]: time="2025-05-17T00:25:32.785974363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 17 00:25:32.959249 containerd[1976]: time="2025-05-17T00:25:32.959090410Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 17 00:25:32.961353 containerd[1976]: time="2025-05-17T00:25:32.961167239Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 17 00:25:32.961353 containerd[1976]: time="2025-05-17T00:25:32.961261081Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 17 00:25:32.962942 kubelet[3161]: E0517 00:25:32.961577 3161 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 17 00:25:32.962942 kubelet[3161]: E0517 00:25:32.961625 3161 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 17 00:25:32.962942 kubelet[3161]: E0517 00:25:32.961917 3161 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mcgrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-w4ggj_calico-system(3edbec67-a280-4b9a-b567-9942c66f18d0): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 17 00:25:32.963567 kubelet[3161]: E0517 00:25:32.963499 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-w4ggj" podUID="3edbec67-a280-4b9a-b567-9942c66f18d0"
May 17 00:25:37.782826 systemd[1]: Started sshd@20-172.31.18.208:22-147.75.109.163:37808.service - OpenSSH per-connection server daemon (147.75.109.163:37808).
May 17 00:25:38.050684 sshd[6519]: Accepted publickey for core from 147.75.109.163 port 37808 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4
May 17 00:25:38.055804 sshd[6519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:25:38.065460 systemd-logind[1960]: New session 21 of user core.
May 17 00:25:38.071767 systemd[1]: Started session-21.scope - Session 21 of User core.
May 17 00:25:38.968150 sshd[6519]: pam_unix(sshd:session): session closed for user core
May 17 00:25:38.974295 systemd[1]: sshd@20-172.31.18.208:22-147.75.109.163:37808.service: Deactivated successfully.
May 17 00:25:38.976017 systemd[1]: session-21.scope: Deactivated successfully.
May 17 00:25:38.976719 systemd-logind[1960]: Session 21 logged out. Waiting for processes to exit.
May 17 00:25:38.977634 systemd-logind[1960]: Removed session 21.
May 17 00:25:40.777979 kubelet[3161]: E0517 00:25:40.777884 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-d96dfd79b-fl892" podUID="9e29c649-bade-4daa-bb31-67432210eca8"
May 17 00:25:44.008725 systemd[1]: Started sshd@21-172.31.18.208:22-147.75.109.163:40844.service - OpenSSH per-connection server daemon (147.75.109.163:40844).
May 17 00:25:44.217809 sshd[6536]: Accepted publickey for core from 147.75.109.163 port 40844 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4
May 17 00:25:44.219180 sshd[6536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:25:44.239928 systemd-logind[1960]: New session 22 of user core.
May 17 00:25:44.244633 systemd[1]: Started session-22.scope - Session 22 of User core.
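
From this point on, the errors alternate between ErrImagePull (a pull was attempted and failed) and ImagePullBackOff (the kubelet is waiting before retrying). The sketch below models the doubling back-off that produces the spacing of the retries visible in the timestamps; the 10-second initial delay and 300-second cap are kubelet's commonly cited defaults and are an assumption here, since the log itself never prints the exact values:

    // backoff_sketch.go - illustrative model of the image-pull back-off
    // behind the ErrImagePull -> ImagePullBackOff alternation in this log.
    // Initial delay and cap are assumed defaults, not values from the log.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 10 * time.Second        // assumed initial back-off
        const maxDelay = 300 * time.Second // assumed cap
        for attempt := 1; attempt <= 8; attempt++ {
            fmt.Printf("attempt %d: wait %v before next pull\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }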
May 17 00:25:44.784065 kubelet[3161]: E0517 00:25:44.784025 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-w4ggj" podUID="3edbec67-a280-4b9a-b567-9942c66f18d0"
May 17 00:25:45.179145 sshd[6536]: pam_unix(sshd:session): session closed for user core
May 17 00:25:45.185316 systemd[1]: sshd@21-172.31.18.208:22-147.75.109.163:40844.service: Deactivated successfully.
May 17 00:25:45.188472 systemd[1]: session-22.scope: Deactivated successfully.
May 17 00:25:45.190293 systemd-logind[1960]: Session 22 logged out. Waiting for processes to exit.
May 17 00:25:45.192838 systemd-logind[1960]: Removed session 22.
May 17 00:25:50.223406 systemd[1]: Started sshd@22-172.31.18.208:22-147.75.109.163:49554.service - OpenSSH per-connection server daemon (147.75.109.163:49554).
May 17 00:25:50.465914 sshd[6573]: Accepted publickey for core from 147.75.109.163 port 49554 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4
May 17 00:25:50.470942 sshd[6573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:25:50.478866 systemd-logind[1960]: New session 23 of user core.
May 17 00:25:50.486164 systemd[1]: Started session-23.scope - Session 23 of User core.
May 17 00:25:51.435925 sshd[6573]: pam_unix(sshd:session): session closed for user core
May 17 00:25:51.438572 systemd[1]: sshd@22-172.31.18.208:22-147.75.109.163:49554.service: Deactivated successfully.
May 17 00:25:51.441476 systemd[1]: session-23.scope: Deactivated successfully.
May 17 00:25:51.444149 systemd-logind[1960]: Session 23 logged out. Waiting for processes to exit.
May 17 00:25:51.446209 systemd-logind[1960]: Removed session 23.
May 17 00:25:54.088454 systemd[1]: run-containerd-runc-k8s.io-2c397b5d94fafd034e66a3725c04852e055aceb54abcab31880798f7820aa610-runc.oQ3auL.mount: Deactivated successfully.
May 17 00:25:54.777880 kubelet[3161]: E0517 00:25:54.777260 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-d96dfd79b-fl892" podUID="9e29c649-bade-4daa-bb31-67432210eca8"
May 17 00:25:56.469725 systemd[1]: Started sshd@23-172.31.18.208:22-147.75.109.163:49570.service - OpenSSH per-connection server daemon (147.75.109.163:49570).
May 17 00:25:56.640051 sshd[6607]: Accepted publickey for core from 147.75.109.163 port 49570 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4
May 17 00:25:56.640813 sshd[6607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:25:56.648361 systemd-logind[1960]: New session 24 of user core.
May 17 00:25:56.652955 systemd[1]: Started session-24.scope - Session 24 of User core.
May 17 00:25:57.690772 sshd[6607]: pam_unix(sshd:session): session closed for user core
May 17 00:25:57.695911 systemd[1]: sshd@23-172.31.18.208:22-147.75.109.163:49570.service: Deactivated successfully.
May 17 00:25:57.699223 systemd[1]: session-24.scope: Deactivated successfully.
May 17 00:25:57.700181 systemd-logind[1960]: Session 24 logged out. Waiting for processes to exit.
May 17 00:25:57.701356 systemd-logind[1960]: Removed session 24.
May 17 00:25:59.778160 kubelet[3161]: E0517 00:25:59.777986 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-w4ggj" podUID="3edbec67-a280-4b9a-b567-9942c66f18d0"
May 17 00:26:02.725866 systemd[1]: Started sshd@24-172.31.18.208:22-147.75.109.163:48702.service - OpenSSH per-connection server daemon (147.75.109.163:48702).
May 17 00:26:02.924127 sshd[6642]: Accepted publickey for core from 147.75.109.163 port 48702 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4
May 17 00:26:02.925845 sshd[6642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:26:02.930658 systemd-logind[1960]: New session 25 of user core.
May 17 00:26:02.937713 systemd[1]: Started session-25.scope - Session 25 of User core.
May 17 00:26:03.369077 sshd[6642]: pam_unix(sshd:session): session closed for user core
May 17 00:26:03.375797 systemd[1]: sshd@24-172.31.18.208:22-147.75.109.163:48702.service: Deactivated successfully.
May 17 00:26:03.377836 systemd[1]: session-25.scope: Deactivated successfully.
May 17 00:26:03.379439 systemd-logind[1960]: Session 25 logged out. Waiting for processes to exit.
May 17 00:26:03.381219 systemd-logind[1960]: Removed session 25.
May 17 00:26:05.803397 kubelet[3161]: E0517 00:26:05.803328 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-d96dfd79b-fl892" podUID="9e29c649-bade-4daa-bb31-67432210eca8"
May 17 00:26:14.815886 containerd[1976]: time="2025-05-17T00:26:14.801575535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 17 00:26:15.024775 containerd[1976]: time="2025-05-17T00:26:15.024712536Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 17 00:26:15.026814 containerd[1976]: time="2025-05-17T00:26:15.026756472Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 17 00:26:15.026935 containerd[1976]: time="2025-05-17T00:26:15.026859231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 17 00:26:15.041789 kubelet[3161]: E0517 00:26:15.041711 3161 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 17 00:26:15.045304 kubelet[3161]: E0517 00:26:15.045256 3161 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 17 00:26:15.057123 kubelet[3161]: E0517 00:26:15.057043 3161 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mcgrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-w4ggj_calico-system(3edbec67-a280-4b9a-b567-9942c66f18d0): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 17 00:26:15.058405 kubelet[3161]: E0517 00:26:15.058354 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-w4ggj" podUID="3edbec67-a280-4b9a-b567-9942c66f18d0"
May 17 00:26:17.463633 systemd[1]: cri-containerd-29271d25608cb5339c8c8c1cfa4cd48cb8ae95cef2e0a6bae94e65729aea26da.scope: Deactivated successfully.
May 17 00:26:17.464272 systemd[1]: cri-containerd-29271d25608cb5339c8c8c1cfa4cd48cb8ae95cef2e0a6bae94e65729aea26da.scope: Consumed 12.993s CPU time.
May 17 00:26:17.673742 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29271d25608cb5339c8c8c1cfa4cd48cb8ae95cef2e0a6bae94e65729aea26da-rootfs.mount: Deactivated successfully.
May 17 00:26:17.718874 containerd[1976]: time="2025-05-17T00:26:17.700751787Z" level=info msg="shim disconnected" id=29271d25608cb5339c8c8c1cfa4cd48cb8ae95cef2e0a6bae94e65729aea26da namespace=k8s.io
May 17 00:26:17.718874 containerd[1976]: time="2025-05-17T00:26:17.718794209Z" level=warning msg="cleaning up after shim disconnected" id=29271d25608cb5339c8c8c1cfa4cd48cb8ae95cef2e0a6bae94e65729aea26da namespace=k8s.io
May 17 00:26:17.718874 containerd[1976]: time="2025-05-17T00:26:17.718817806Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:26:18.015797 kubelet[3161]: I0517 00:26:18.015747 3161 scope.go:117] "RemoveContainer" containerID="29271d25608cb5339c8c8c1cfa4cd48cb8ae95cef2e0a6bae94e65729aea26da"
May 17 00:26:18.111231 containerd[1976]: time="2025-05-17T00:26:18.111160262Z" level=info msg="CreateContainer within sandbox \"803e8296431489d593801dfbe3f73a20276d3e43e0fb1ab06f4fece81d25dcd8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
May 17 00:26:18.223211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1234028457.mount: Deactivated successfully.
May 17 00:26:18.233797 containerd[1976]: time="2025-05-17T00:26:18.233744976Z" level=info msg="CreateContainer within sandbox \"803e8296431489d593801dfbe3f73a20276d3e43e0fb1ab06f4fece81d25dcd8\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"8980a5d57fafcf51189e35c9d654dbc8a4fa778cfd7633525f3eb4fb61592e0a\""
May 17 00:26:18.238653 containerd[1976]: time="2025-05-17T00:26:18.238602473Z" level=info msg="StartContainer for \"8980a5d57fafcf51189e35c9d654dbc8a4fa778cfd7633525f3eb4fb61592e0a\""
May 17 00:26:18.281753 systemd[1]: Started cri-containerd-8980a5d57fafcf51189e35c9d654dbc8a4fa778cfd7633525f3eb4fb61592e0a.scope - libcontainer container 8980a5d57fafcf51189e35c9d654dbc8a4fa778cfd7633525f3eb4fb61592e0a.
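
The sequence just above is a full container replacement: the old tigera-operator container's scope is deactivated, containerd reaps the shim and unmounts the rootfs, and the kubelet logs RemoveContainer followed by CreateContainer with Attempt:1. A hedged client-go sketch for inspecting the resulting restart counts from outside the node; the kubeconfig path is an assumption, and the namespace and pod name are taken from entries later in this log (tigera-operator-844669ff44-bssfg):

    // restarts.go - sketch using client-go to read container restart counts
    // after a replacement like the Attempt:1 recreation logged above. This
    // is observer tooling, not the kubelet's own code; the kubeconfig path
    // is assumed.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        // Pod name as it appears later in this log.
        pod, err := clientset.CoreV1().Pods("tigera-operator").
            Get(context.TODO(), "tigera-operator-844669ff44-bssfg", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, cs := range pod.Status.ContainerStatuses {
            fmt.Printf("%s restarts=%d\n", cs.Name, cs.RestartCount)
        }
    }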
May 17 00:26:18.318229 containerd[1976]: time="2025-05-17T00:26:18.318081436Z" level=info msg="StartContainer for \"8980a5d57fafcf51189e35c9d654dbc8a4fa778cfd7633525f3eb4fb61592e0a\" returns successfully"
May 17 00:26:18.561666 kubelet[3161]: E0517 00:26:18.561521 3161 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.208:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-208?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
May 17 00:26:18.622482 systemd[1]: cri-containerd-a1e35264acc7685a0f2728d122927ebc61edeac63efd2dabe9bebc3d91a006d8.scope: Deactivated successfully.
May 17 00:26:18.622735 systemd[1]: cri-containerd-a1e35264acc7685a0f2728d122927ebc61edeac63efd2dabe9bebc3d91a006d8.scope: Consumed 3.738s CPU time, 31.3M memory peak, 0B memory swap peak.
May 17 00:26:18.645871 containerd[1976]: time="2025-05-17T00:26:18.645794228Z" level=info msg="shim disconnected" id=a1e35264acc7685a0f2728d122927ebc61edeac63efd2dabe9bebc3d91a006d8 namespace=k8s.io
May 17 00:26:18.645871 containerd[1976]: time="2025-05-17T00:26:18.645865881Z" level=warning msg="cleaning up after shim disconnected" id=a1e35264acc7685a0f2728d122927ebc61edeac63efd2dabe9bebc3d91a006d8 namespace=k8s.io
May 17 00:26:18.645871 containerd[1976]: time="2025-05-17T00:26:18.645876864Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:26:18.672057 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1e35264acc7685a0f2728d122927ebc61edeac63efd2dabe9bebc3d91a006d8-rootfs.mount: Deactivated successfully.
May 17 00:26:19.007933 kubelet[3161]: I0517 00:26:19.007905 3161 scope.go:117] "RemoveContainer" containerID="a1e35264acc7685a0f2728d122927ebc61edeac63efd2dabe9bebc3d91a006d8"
May 17 00:26:19.010267 containerd[1976]: time="2025-05-17T00:26:19.010097150Z" level=info msg="CreateContainer within sandbox \"92ba8af35bb4bcf3c2a64cea2e4aae6ca6dfeead85ad3aadc5b44bfef4da0f0b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
May 17 00:26:19.032750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3224921077.mount: Deactivated successfully.
May 17 00:26:19.036973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4190083612.mount: Deactivated successfully.
May 17 00:26:19.042460 containerd[1976]: time="2025-05-17T00:26:19.042409112Z" level=info msg="CreateContainer within sandbox \"92ba8af35bb4bcf3c2a64cea2e4aae6ca6dfeead85ad3aadc5b44bfef4da0f0b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"3e50967f6a65fb2e6e8337bdea872b1e60daca6b32bab59a9c7e2941bf8140ff\""
May 17 00:26:19.043127 containerd[1976]: time="2025-05-17T00:26:19.043030253Z" level=info msg="StartContainer for \"3e50967f6a65fb2e6e8337bdea872b1e60daca6b32bab59a9c7e2941bf8140ff\""
May 17 00:26:19.094716 systemd[1]: Started cri-containerd-3e50967f6a65fb2e6e8337bdea872b1e60daca6b32bab59a9c7e2941bf8140ff.scope - libcontainer container 3e50967f6a65fb2e6e8337bdea872b1e60daca6b32bab59a9c7e2941bf8140ff.
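
The "Failed to update lease" entry is the kubelet's node-heartbeat path: it PUTs a coordination.k8s.io Lease named after the node (ip-172-31-18-208, with the 10s timeout visible in the logged URL), and the timeout here coincides with the kube-controller-manager container being replaced. A sketch of the equivalent renewal using client-go, for orientation only; the kubeconfig path is assumed and this is not the kubelet's actual lease controller:

    // lease_renew.go - illustrative equivalent of the Lease update that the
    // "Failed to update lease" message refers to. Node name comes from the
    // logged URL; the kubeconfig path is an assumption.
    package main

    import (
        "context"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // assumed path
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(config)
        leases := clientset.CoordinationV1().Leases("kube-node-lease")

        lease, err := leases.Get(context.TODO(), "ip-172-31-18-208", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        now := metav1.NewMicroTime(time.Now())
        lease.Spec.RenewTime = &now
        if _, err := leases.Update(context.TODO(), lease, metav1.UpdateOptions{}); err != nil {
            // A timeout on this Update is what the log's error reports.
            panic(err)
        }
    }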
May 17 00:26:19.147355 containerd[1976]: time="2025-05-17T00:26:19.147284572Z" level=info msg="StartContainer for \"3e50967f6a65fb2e6e8337bdea872b1e60daca6b32bab59a9c7e2941bf8140ff\" returns successfully"
May 17 00:26:19.778558 containerd[1976]: time="2025-05-17T00:26:19.778049706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\""
May 17 00:26:20.001720 containerd[1976]: time="2025-05-17T00:26:20.001669409Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 17 00:26:20.006455 containerd[1976]: time="2025-05-17T00:26:20.006362263Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden"
May 17 00:26:20.006908 containerd[1976]: time="2025-05-17T00:26:20.006701096Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86"
May 17 00:26:20.007457 kubelet[3161]: E0517 00:26:20.007193 3161 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 17 00:26:20.007457 kubelet[3161]: E0517 00:26:20.007252 3161 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 17 00:26:20.007457 kubelet[3161]: E0517 00:26:20.007383 3161 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b29552b59a2b4980bc180c562b9beff2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v4jmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-d96dfd79b-fl892_calico-system(9e29c649-bade-4daa-bb31-67432210eca8): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 17 00:26:20.010072 containerd[1976]: time="2025-05-17T00:26:20.009717041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\""
May 17 00:26:20.204216 containerd[1976]: time="2025-05-17T00:26:20.204082503Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 17 00:26:20.207364 containerd[1976]: time="2025-05-17T00:26:20.206662996Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden"
May 17 00:26:20.207364 containerd[1976]: time="2025-05-17T00:26:20.206786824Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86"
May 17 00:26:20.207593 kubelet[3161]: E0517 00:26:20.207017 3161 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 17 00:26:20.207593 kubelet[3161]: E0517 00:26:20.207120 3161 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 17 00:26:20.207593 kubelet[3161]: E0517 00:26:20.207272 3161 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v4jmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-d96dfd79b-fl892_calico-system(9e29c649-bade-4daa-bb31-67432210eca8): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 17 00:26:20.208770 kubelet[3161]: E0517 00:26:20.208506 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-d96dfd79b-fl892" podUID="9e29c649-bade-4daa-bb31-67432210eca8"
May 17 00:26:23.482161 systemd[1]: cri-containerd-6b34d489b8b90c7b8b8dce80b063c5013d2c6e5d9272698b8b712782ade38b48.scope: Deactivated successfully.
May 17 00:26:23.482382 systemd[1]: cri-containerd-6b34d489b8b90c7b8b8dce80b063c5013d2c6e5d9272698b8b712782ade38b48.scope: Consumed 2.269s CPU time, 18.1M memory peak, 0B memory swap peak.
May 17 00:26:23.511164 containerd[1976]: time="2025-05-17T00:26:23.511010960Z" level=info msg="shim disconnected" id=6b34d489b8b90c7b8b8dce80b063c5013d2c6e5d9272698b8b712782ade38b48 namespace=k8s.io
May 17 00:26:23.511517 containerd[1976]: time="2025-05-17T00:26:23.511160993Z" level=warning msg="cleaning up after shim disconnected" id=6b34d489b8b90c7b8b8dce80b063c5013d2c6e5d9272698b8b712782ade38b48 namespace=k8s.io
May 17 00:26:23.511517 containerd[1976]: time="2025-05-17T00:26:23.511182814Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:26:23.513035 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b34d489b8b90c7b8b8dce80b063c5013d2c6e5d9272698b8b712782ade38b48-rootfs.mount: Deactivated successfully.
May 17 00:26:24.026029 kubelet[3161]: I0517 00:26:24.025991 3161 scope.go:117] "RemoveContainer" containerID="6b34d489b8b90c7b8b8dce80b063c5013d2c6e5d9272698b8b712782ade38b48"
May 17 00:26:24.028120 containerd[1976]: time="2025-05-17T00:26:24.028082812Z" level=info msg="CreateContainer within sandbox \"d1f9d683bcced7b53fecf9deea02b7d1beccf4fc3c1cb22d3039be6db3bb6ef4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
May 17 00:26:24.049047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4001183115.mount: Deactivated successfully.
May 17 00:26:24.051567 containerd[1976]: time="2025-05-17T00:26:24.051493397Z" level=info msg="CreateContainer within sandbox \"d1f9d683bcced7b53fecf9deea02b7d1beccf4fc3c1cb22d3039be6db3bb6ef4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"b4d16c429642cf1a58c5374ad5ff54f740a060deb5e497467e1df4aa7d725678\""
May 17 00:26:24.052074 containerd[1976]: time="2025-05-17T00:26:24.052039213Z" level=info msg="StartContainer for \"b4d16c429642cf1a58c5374ad5ff54f740a060deb5e497467e1df4aa7d725678\""
May 17 00:26:24.097752 systemd[1]: Started cri-containerd-b4d16c429642cf1a58c5374ad5ff54f740a060deb5e497467e1df4aa7d725678.scope - libcontainer container b4d16c429642cf1a58c5374ad5ff54f740a060deb5e497467e1df4aa7d725678.
May 17 00:26:24.152326 containerd[1976]: time="2025-05-17T00:26:24.152286789Z" level=info msg="StartContainer for \"b4d16c429642cf1a58c5374ad5ff54f740a060deb5e497467e1df4aa7d725678\" returns successfully"
May 17 00:26:24.519353 systemd[1]: run-containerd-runc-k8s.io-b4d16c429642cf1a58c5374ad5ff54f740a060deb5e497467e1df4aa7d725678-runc.hg9Flf.mount: Deactivated successfully.
May 17 00:26:26.777364 kubelet[3161]: E0517 00:26:26.777318 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-w4ggj" podUID="3edbec67-a280-4b9a-b567-9942c66f18d0"
May 17 00:26:28.562001 kubelet[3161]: E0517 00:26:28.561849 3161 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.208:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-208?timeout=10s\": context deadline exceeded"
May 17 00:26:30.043274 systemd[1]: cri-containerd-8980a5d57fafcf51189e35c9d654dbc8a4fa778cfd7633525f3eb4fb61592e0a.scope: Deactivated successfully.
May 17 00:26:30.069746 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8980a5d57fafcf51189e35c9d654dbc8a4fa778cfd7633525f3eb4fb61592e0a-rootfs.mount: Deactivated successfully.
May 17 00:26:30.084546 containerd[1976]: time="2025-05-17T00:26:30.084446791Z" level=info msg="shim disconnected" id=8980a5d57fafcf51189e35c9d654dbc8a4fa778cfd7633525f3eb4fb61592e0a namespace=k8s.io
May 17 00:26:30.084546 containerd[1976]: time="2025-05-17T00:26:30.084522917Z" level=warning msg="cleaning up after shim disconnected" id=8980a5d57fafcf51189e35c9d654dbc8a4fa778cfd7633525f3eb4fb61592e0a namespace=k8s.io
May 17 00:26:30.084546 containerd[1976]: time="2025-05-17T00:26:30.084555893Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:26:30.140293 kubelet[3161]: I0517 00:26:30.140206 3161 scope.go:117] "RemoveContainer" containerID="29271d25608cb5339c8c8c1cfa4cd48cb8ae95cef2e0a6bae94e65729aea26da"
May 17 00:26:30.140838 kubelet[3161]: I0517 00:26:30.140577 3161 scope.go:117] "RemoveContainer" containerID="8980a5d57fafcf51189e35c9d654dbc8a4fa778cfd7633525f3eb4fb61592e0a"
May 17 00:26:30.140838 kubelet[3161]: E0517 00:26:30.140708 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-844669ff44-bssfg_tigera-operator(ac909d2b-6981-4b37-a85b-a5a2163972f1)\"" pod="tigera-operator/tigera-operator-844669ff44-bssfg" podUID="ac909d2b-6981-4b37-a85b-a5a2163972f1"
May 17 00:26:30.254254 containerd[1976]: time="2025-05-17T00:26:30.254189071Z" level=info msg="RemoveContainer for \"29271d25608cb5339c8c8c1cfa4cd48cb8ae95cef2e0a6bae94e65729aea26da\""
May 17 00:26:30.270514 containerd[1976]: time="2025-05-17T00:26:30.270448831Z" level=info msg="RemoveContainer for \"29271d25608cb5339c8c8c1cfa4cd48cb8ae95cef2e0a6bae94e65729aea26da\" returns successfully"
May 17 00:26:35.777950 kubelet[3161]: E0517 00:26:35.777855 3161 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-d96dfd79b-fl892" podUID="9e29c649-bade-4daa-bb31-67432210eca8"