May 17 00:24:29.895847 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri May 16 22:44:56 -00 2025
May 17 00:24:29.895870 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:24:29.895883 kernel: BIOS-provided physical RAM map:
May 17 00:24:29.895890 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 17 00:24:29.895896 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
May 17 00:24:29.895902 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
May 17 00:24:29.895910 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
May 17 00:24:29.895917 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
May 17 00:24:29.895924 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
May 17 00:24:29.895933 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
May 17 00:24:29.895940 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
May 17 00:24:29.895947 kernel: NX (Execute Disable) protection: active
May 17 00:24:29.895954 kernel: APIC: Static calls initialized
May 17 00:24:29.895961 kernel: efi: EFI v2.7 by EDK II
May 17 00:24:29.895970 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518
May 17 00:24:29.895980 kernel: SMBIOS 2.7 present.
May 17 00:24:29.895988 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
May 17 00:24:29.895996 kernel: Hypervisor detected: KVM
May 17 00:24:29.896003 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 17 00:24:29.896011 kernel: kvm-clock: using sched offset of 4285238975 cycles
May 17 00:24:29.896019 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 17 00:24:29.896027 kernel: tsc: Detected 2499.996 MHz processor
May 17 00:24:29.896035 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 17 00:24:29.896043 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 17 00:24:29.896051 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
May 17 00:24:29.896061 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 17 00:24:29.896069 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 17 00:24:29.896076 kernel: Using GB pages for direct mapping
May 17 00:24:29.896084 kernel: Secure boot disabled
May 17 00:24:29.896092 kernel: ACPI: Early table checksum verification disabled
May 17 00:24:29.896099 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
May 17 00:24:29.896107 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
May 17 00:24:29.896115 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
May 17 00:24:29.896123 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
May 17 00:24:29.896133 kernel: ACPI: FACS 0x00000000789D0000 000040
May 17 00:24:29.896141 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
May 17 00:24:29.896148 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
May 17 00:24:29.896156 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
May 17 00:24:29.896163 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
May 17 00:24:29.896171 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
May 17 00:24:29.896183 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
May 17 00:24:29.896193 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
May 17 00:24:29.896202 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
May 17 00:24:29.896210 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
May 17 00:24:29.896218 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
May 17 00:24:29.896227 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
May 17 00:24:29.896235 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
May 17 00:24:29.896243 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
May 17 00:24:29.896254 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
May 17 00:24:29.896262 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
May 17 00:24:29.896270 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
May 17 00:24:29.896278 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
May 17 00:24:29.896286 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
May 17 00:24:29.896294 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
May 17 00:24:29.896302 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
May 17 00:24:29.896311 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
May 17 00:24:29.896319 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
May 17 00:24:29.896330 kernel: NUMA: Initialized distance table, cnt=1
May 17 00:24:29.896338 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff]
May 17 00:24:29.896346 kernel: Zone ranges:
May 17 00:24:29.896354 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 17 00:24:29.896362 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
May 17 00:24:29.896371 kernel: Normal empty
May 17 00:24:29.896379 kernel: Movable zone start for each node
May 17 00:24:29.896387 kernel: Early memory node ranges
May 17 00:24:29.896395 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 17 00:24:29.896405 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
May 17 00:24:29.896414 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
May 17 00:24:29.896422 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
May 17 00:24:29.896430 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 17 00:24:29.896438 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 17 00:24:29.896446 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
May 17 00:24:29.896455 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
May 17 00:24:29.896463 kernel: ACPI: PM-Timer IO Port: 0xb008
May 17 00:24:29.896471 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 17 00:24:29.896492 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
May 17 00:24:29.896504 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 17 00:24:29.896512 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 17 00:24:29.896520 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 17 00:24:29.896528 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 17 00:24:29.896537 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 17 00:24:29.896545 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 17 00:24:29.896553 kernel: TSC deadline timer available
May 17 00:24:29.896561 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 17 00:24:29.896569 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 17 00:24:29.896580 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
May 17 00:24:29.896588 kernel: Booting paravirtualized kernel on KVM
May 17 00:24:29.896597 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 17 00:24:29.896605 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 17 00:24:29.896613 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
May 17 00:24:29.896621 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
May 17 00:24:29.896629 kernel: pcpu-alloc: [0] 0 1
May 17 00:24:29.896637 kernel: kvm-guest: PV spinlocks enabled
May 17 00:24:29.896645 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 17 00:24:29.896657 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:24:29.896666 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 00:24:29.896674 kernel: random: crng init done
May 17 00:24:29.896682 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 00:24:29.896691 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 17 00:24:29.896699 kernel: Fallback order for Node 0: 0
May 17 00:24:29.896707 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
May 17 00:24:29.896715 kernel: Policy zone: DMA32
May 17 00:24:29.896726 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 00:24:29.896735 kernel: Memory: 1874604K/2037804K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42872K init, 2320K bss, 162940K reserved, 0K cma-reserved)
May 17 00:24:29.896743 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 17 00:24:29.896751 kernel: Kernel/User page tables isolation: enabled
May 17 00:24:29.896760 kernel: ftrace: allocating 37948 entries in 149 pages
May 17 00:24:29.896768 kernel: ftrace: allocated 149 pages with 4 groups
May 17 00:24:29.896776 kernel: Dynamic Preempt: voluntary
May 17 00:24:29.896784 kernel: rcu: Preemptible hierarchical RCU implementation.
May 17 00:24:29.896793 kernel: rcu: RCU event tracing is enabled.
May 17 00:24:29.896804 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 17 00:24:29.896812 kernel: Trampoline variant of Tasks RCU enabled.
May 17 00:24:29.896821 kernel: Rude variant of Tasks RCU enabled.
May 17 00:24:29.896829 kernel: Tracing variant of Tasks RCU enabled.
May 17 00:24:29.896837 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 17 00:24:29.896845 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 17 00:24:29.896854 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 17 00:24:29.896873 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 17 00:24:29.896881 kernel: Console: colour dummy device 80x25
May 17 00:24:29.896890 kernel: printk: console [tty0] enabled
May 17 00:24:29.896899 kernel: printk: console [ttyS0] enabled
May 17 00:24:29.896907 kernel: ACPI: Core revision 20230628
May 17 00:24:29.896918 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
May 17 00:24:29.896927 kernel: APIC: Switch to symmetric I/O mode setup
May 17 00:24:29.896936 kernel: x2apic enabled
May 17 00:24:29.896945 kernel: APIC: Switched APIC routing to: physical x2apic
May 17 00:24:29.896954 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
May 17 00:24:29.896965 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
May 17 00:24:29.896974 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
May 17 00:24:29.896982 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
May 17 00:24:29.896991 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 17 00:24:29.897000 kernel: Spectre V2 : Mitigation: Retpolines
May 17 00:24:29.897014 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 17 00:24:29.897022 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
May 17 00:24:29.897031 kernel: RETBleed: Vulnerable
May 17 00:24:29.897040 kernel: Speculative Store Bypass: Vulnerable
May 17 00:24:29.897052 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
May 17 00:24:29.897061 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
May 17 00:24:29.897070 kernel: GDS: Unknown: Dependent on hypervisor status
May 17 00:24:29.897078 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 17 00:24:29.897087 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 17 00:24:29.897096 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 17 00:24:29.897105 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
May 17 00:24:29.897113 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
May 17 00:24:29.897122 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
May 17 00:24:29.897131 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
May 17 00:24:29.897139 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
May 17 00:24:29.897151 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
May 17 00:24:29.897159 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 17 00:24:29.897168 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
May 17 00:24:29.897177 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
May 17 00:24:29.897186 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
May 17 00:24:29.897194 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
May 17 00:24:29.897203 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
May 17 00:24:29.897212 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
May 17 00:24:29.897220 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
May 17 00:24:29.897229 kernel: Freeing SMP alternatives memory: 32K
May 17 00:24:29.897238 kernel: pid_max: default: 32768 minimum: 301
May 17 00:24:29.897246 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 17 00:24:29.897257 kernel: landlock: Up and running.
May 17 00:24:29.897266 kernel: SELinux: Initializing.
May 17 00:24:29.897275 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 17 00:24:29.897284 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 17 00:24:29.897293 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
May 17 00:24:29.897301 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 17 00:24:29.897310 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 17 00:24:29.897319 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 17 00:24:29.897328 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
May 17 00:24:29.897337 kernel: signal: max sigframe size: 3632
May 17 00:24:29.897349 kernel: rcu: Hierarchical SRCU implementation.
May 17 00:24:29.897358 kernel: rcu: Max phase no-delay instances is 400.
May 17 00:24:29.897366 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 17 00:24:29.897375 kernel: smp: Bringing up secondary CPUs ...
May 17 00:24:29.897384 kernel: smpboot: x86: Booting SMP configuration:
May 17 00:24:29.897392 kernel: .... node #0, CPUs: #1
May 17 00:24:29.897402 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
May 17 00:24:29.897411 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
May 17 00:24:29.897423 kernel: smp: Brought up 1 node, 2 CPUs
May 17 00:24:29.897431 kernel: smpboot: Max logical packages: 1
May 17 00:24:29.897440 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
May 17 00:24:29.897449 kernel: devtmpfs: initialized
May 17 00:24:29.897458 kernel: x86/mm: Memory block size: 128MB
May 17 00:24:29.897467 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
May 17 00:24:29.897476 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 17 00:24:29.897494 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 17 00:24:29.897503 kernel: pinctrl core: initialized pinctrl subsystem
May 17 00:24:29.897514 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 17 00:24:29.897523 kernel: audit: initializing netlink subsys (disabled)
May 17 00:24:29.897532 kernel: audit: type=2000 audit(1747441469.547:1): state=initialized audit_enabled=0 res=1
May 17 00:24:29.897541 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 17 00:24:29.897550 kernel: thermal_sys: Registered thermal governor 'user_space'
May 17 00:24:29.897558 kernel: cpuidle: using governor menu
May 17 00:24:29.897567 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 17 00:24:29.897576 kernel: dca service started, version 1.12.1
May 17 00:24:29.897585 kernel: PCI: Using configuration type 1 for base access
May 17 00:24:29.897596 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 17 00:24:29.897605 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 17 00:24:29.897613 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 17 00:24:29.897622 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 17 00:24:29.897631 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 17 00:24:29.897639 kernel: ACPI: Added _OSI(Module Device)
May 17 00:24:29.897648 kernel: ACPI: Added _OSI(Processor Device)
May 17 00:24:29.897657 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 00:24:29.897666 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 17 00:24:29.897677 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
May 17 00:24:29.897686 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 17 00:24:29.897695 kernel: ACPI: Interpreter enabled
May 17 00:24:29.897703 kernel: ACPI: PM: (supports S0 S5)
May 17 00:24:29.897712 kernel: ACPI: Using IOAPIC for interrupt routing
May 17 00:24:29.897721 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 17 00:24:29.897730 kernel: PCI: Using E820 reservations for host bridge windows
May 17 00:24:29.897739 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
May 17 00:24:29.897747 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 17 00:24:29.897901 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
May 17 00:24:29.898000 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
May 17 00:24:29.898091 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
May 17 00:24:29.898102 kernel: acpiphp: Slot [3] registered
May 17 00:24:29.898111 kernel: acpiphp: Slot [4] registered
May 17 00:24:29.898120 kernel: acpiphp: Slot [5] registered
May 17 00:24:29.898129 kernel: acpiphp: Slot [6] registered
May 17 00:24:29.898138 kernel: acpiphp: Slot [7] registered
May 17 00:24:29.898149 kernel: acpiphp: Slot [8] registered
May 17 00:24:29.898158 kernel: acpiphp: Slot [9] registered
May 17 00:24:29.898166 kernel: acpiphp: Slot [10] registered
May 17 00:24:29.898175 kernel: acpiphp: Slot [11] registered
May 17 00:24:29.898184 kernel: acpiphp: Slot [12] registered
May 17 00:24:29.898192 kernel: acpiphp: Slot [13] registered
May 17 00:24:29.898201 kernel: acpiphp: Slot [14] registered
May 17 00:24:29.898210 kernel: acpiphp: Slot [15] registered
May 17 00:24:29.898218 kernel: acpiphp: Slot [16] registered
May 17 00:24:29.898229 kernel: acpiphp: Slot [17] registered
May 17 00:24:29.898238 kernel: acpiphp: Slot [18] registered
May 17 00:24:29.898247 kernel: acpiphp: Slot [19] registered
May 17 00:24:29.898255 kernel: acpiphp: Slot [20] registered
May 17 00:24:29.898264 kernel: acpiphp: Slot [21] registered
May 17 00:24:29.898273 kernel: acpiphp: Slot [22] registered
May 17 00:24:29.898282 kernel: acpiphp: Slot [23] registered
May 17 00:24:29.898290 kernel: acpiphp: Slot [24] registered
May 17 00:24:29.898299 kernel: acpiphp: Slot [25] registered
May 17 00:24:29.898308 kernel: acpiphp: Slot [26] registered
May 17 00:24:29.898319 kernel: acpiphp: Slot [27] registered
May 17 00:24:29.898328 kernel: acpiphp: Slot [28] registered
May 17 00:24:29.898336 kernel: acpiphp: Slot [29] registered
May 17 00:24:29.898345 kernel: acpiphp: Slot [30] registered
May 17 00:24:29.898353 kernel: acpiphp: Slot [31] registered
May 17 00:24:29.898362 kernel: PCI host bridge to bus 0000:00
May 17 00:24:29.898454 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 17 00:24:29.899126 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 17 00:24:29.899230 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 17 00:24:29.899313 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
May 17 00:24:29.899395 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
May 17 00:24:29.899476 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 17 00:24:29.899611 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
May 17 00:24:29.899712 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
May 17 00:24:29.899822 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
May 17 00:24:29.899914 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
May 17 00:24:29.900005 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
May 17 00:24:29.900094 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
May 17 00:24:29.900184 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
May 17 00:24:29.900273 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
May 17 00:24:29.900362 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
May 17 00:24:29.900455 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
May 17 00:24:29.900568 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
May 17 00:24:29.900659 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
May 17 00:24:29.900748 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
May 17 00:24:29.900837 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
May 17 00:24:29.900926 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 17 00:24:29.901029 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
May 17 00:24:29.901128 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
May 17 00:24:29.901222 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
May 17 00:24:29.901314 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
May 17 00:24:29.901326 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 17 00:24:29.903539 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 17 00:24:29.903556 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 17 00:24:29.903565 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 17 00:24:29.903580 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
May 17 00:24:29.903589 kernel: iommu: Default domain type: Translated
May 17 00:24:29.903598 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 17 00:24:29.903607 kernel: efivars: Registered efivars operations
May 17 00:24:29.903616 kernel: PCI: Using ACPI for IRQ routing
May 17 00:24:29.903625 kernel: PCI: pci_cache_line_size set to 64 bytes
May 17 00:24:29.903635 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
May 17 00:24:29.903644 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
May 17 00:24:29.903782 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
May 17 00:24:29.903883 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
May 17 00:24:29.903974 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 17 00:24:29.903986 kernel: vgaarb: loaded
May 17 00:24:29.903996 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
May 17 00:24:29.904005 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
May 17 00:24:29.904013 kernel: clocksource: Switched to clocksource kvm-clock
May 17 00:24:29.904022 kernel: VFS: Disk quotas dquot_6.6.0
May 17 00:24:29.904031 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 17 00:24:29.904040 kernel: pnp: PnP ACPI init
May 17 00:24:29.904052 kernel: pnp: PnP ACPI: found 5 devices
May 17 00:24:29.904061 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 17 00:24:29.904070 kernel: NET: Registered PF_INET protocol family
May 17 00:24:29.904079 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 17 00:24:29.904088 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
May 17 00:24:29.904097 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 17 00:24:29.904106 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 17 00:24:29.904115 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
May 17 00:24:29.904124 kernel: TCP: Hash tables configured (established 16384 bind 16384)
May 17 00:24:29.904135 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 17 00:24:29.904144 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 17 00:24:29.904153 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 17 00:24:29.904162 kernel: NET: Registered PF_XDP protocol family
May 17 00:24:29.904261 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 17 00:24:29.904344 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 17 00:24:29.904424 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 17 00:24:29.905838 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
May 17 00:24:29.905959 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
May 17 00:24:29.906070 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 17 00:24:29.906084 kernel: PCI: CLS 0 bytes, default 64
May 17 00:24:29.906094 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
May 17 00:24:29.906104 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
May 17 00:24:29.906113 kernel: clocksource: Switched to clocksource tsc
May 17 00:24:29.906122 kernel: Initialise system trusted keyrings
May 17 00:24:29.906131 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
May 17 00:24:29.906143 kernel: Key type asymmetric registered
May 17 00:24:29.906152 kernel: Asymmetric key parser 'x509' registered
May 17 00:24:29.906161 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 17 00:24:29.906170 kernel: io scheduler mq-deadline registered
May 17 00:24:29.906179 kernel: io scheduler kyber registered
May 17 00:24:29.906188 kernel: io scheduler bfq registered
May 17 00:24:29.906197 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 17 00:24:29.906206 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 17 00:24:29.906215 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 17 00:24:29.906224 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 17 00:24:29.906235 kernel: i8042: Warning: Keylock active
May 17 00:24:29.906244 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 17 00:24:29.906253 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 17 00:24:29.906351 kernel: rtc_cmos 00:00: RTC can wake from S4
May 17 00:24:29.906438 kernel: rtc_cmos 00:00: registered as rtc0
May 17 00:24:29.906652 kernel: rtc_cmos 00:00: setting system clock to 2025-05-17T00:24:29 UTC (1747441469)
May 17 00:24:29.906737 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
May 17 00:24:29.906754 kernel: intel_pstate: CPU model not supported
May 17 00:24:29.906763 kernel: efifb: probing for efifb
May 17 00:24:29.906772 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
May 17 00:24:29.906781 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
May 17 00:24:29.906790 kernel: efifb: scrolling: redraw
May 17 00:24:29.906799 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 17 00:24:29.906809 kernel: Console: switching to colour frame buffer device 100x37
May 17 00:24:29.906817 kernel: fb0: EFI VGA frame buffer device
May 17 00:24:29.906827 kernel: pstore: Using crash dump compression: deflate
May 17 00:24:29.906835 kernel: pstore: Registered efi_pstore as persistent store backend
May 17 00:24:29.906847 kernel: NET: Registered PF_INET6 protocol family
May 17 00:24:29.906856 kernel: Segment Routing with IPv6
May 17 00:24:29.906864 kernel: In-situ OAM (IOAM) with IPv6
May 17 00:24:29.906873 kernel: NET: Registered PF_PACKET protocol family
May 17 00:24:29.906882 kernel: Key type dns_resolver registered
May 17 00:24:29.906891 kernel: IPI shorthand broadcast: enabled
May 17 00:24:29.906918 kernel: sched_clock: Marking stable (481002769, 131807685)->(680533455, -67723001)
May 17 00:24:29.906930 kernel: registered taskstats version 1
May 17 00:24:29.906940 kernel: Loading compiled-in X.509 certificates
May 17 00:24:29.906952 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 85b8d1234ceca483cb3defc2030d93f7792663c9'
May 17 00:24:29.906961 kernel: Key type .fscrypt registered
May 17 00:24:29.906981 kernel: Key type fscrypt-provisioning registered
May 17 00:24:29.906990 kernel: ima: No TPM chip found, activating TPM-bypass!
May 17 00:24:29.907000 kernel: ima: Allocated hash algorithm: sha1
May 17 00:24:29.907009 kernel: ima: No architecture policies found
May 17 00:24:29.907018 kernel: clk: Disabling unused clocks
May 17 00:24:29.907027 kernel: Freeing unused kernel image (initmem) memory: 42872K
May 17 00:24:29.907040 kernel: Write protecting the kernel read-only data: 36864k
May 17 00:24:29.907050 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
May 17 00:24:29.907059 kernel: Run /init as init process
May 17 00:24:29.907068 kernel: with arguments:
May 17 00:24:29.907077 kernel: /init
May 17 00:24:29.907087 kernel: with environment:
May 17 00:24:29.907096 kernel: HOME=/
May 17 00:24:29.907105 kernel: TERM=linux
May 17 00:24:29.907114 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 17 00:24:29.907129 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 17 00:24:29.907141 systemd[1]: Detected virtualization amazon.
May 17 00:24:29.907151 systemd[1]: Detected architecture x86-64.
May 17 00:24:29.907161 systemd[1]: Running in initrd.
May 17 00:24:29.907170 systemd[1]: No hostname configured, using default hostname.
May 17 00:24:29.907179 systemd[1]: Hostname set to .
May 17 00:24:29.907189 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:24:29.907202 systemd[1]: Queued start job for default target initrd.target.
May 17 00:24:29.907211 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:24:29.907221 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:24:29.907232 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 17 00:24:29.907241 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 17 00:24:29.907251 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 17 00:24:29.907261 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 17 00:24:29.907275 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 17 00:24:29.907285 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 17 00:24:29.907294 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:24:29.907304 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 17 00:24:29.907314 systemd[1]: Reached target paths.target - Path Units.
May 17 00:24:29.907327 systemd[1]: Reached target slices.target - Slice Units.
May 17 00:24:29.907336 systemd[1]: Reached target swap.target - Swaps.
May 17 00:24:29.907346 systemd[1]: Reached target timers.target - Timer Units.
May 17 00:24:29.907356 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 17 00:24:29.907366 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 17 00:24:29.907376 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 17 00:24:29.907386 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 17 00:24:29.907396 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:24:29.907406 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 17 00:24:29.907418 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:24:29.907428 systemd[1]: Reached target sockets.target - Socket Units.
May 17 00:24:29.907438 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 17 00:24:29.907448 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 17 00:24:29.907458 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 17 00:24:29.907467 systemd[1]: Starting systemd-fsck-usr.service...
May 17 00:24:29.907477 systemd[1]: Starting systemd-journald.service - Journal Service...
May 17 00:24:29.907497 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 17 00:24:29.907530 systemd-journald[178]: Collecting audit messages is disabled.
May 17 00:24:29.907553 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:24:29.907563 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 17 00:24:29.907573 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:24:29.907586 systemd[1]: Finished systemd-fsck-usr.service.
May 17 00:24:29.907597 systemd-journald[178]: Journal started
May 17 00:24:29.907618 systemd-journald[178]: Runtime Journal (/run/log/journal/ec20170fb24f5a29351d133d59940101) is 4.7M, max 38.2M, 33.4M free.
May 17 00:24:29.912500 systemd[1]: Started systemd-journald.service - Journal Service.
May 17 00:24:29.914757 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:24:29.921910 systemd-modules-load[179]: Inserted module 'overlay'
May 17 00:24:29.924683 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:24:29.926675 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 17 00:24:29.937750 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 17 00:24:29.951436 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 17 00:24:29.959719 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 17 00:24:29.961499 kernel: Bridge firewalling registered
May 17 00:24:29.961515 systemd-modules-load[179]: Inserted module 'br_netfilter'
May 17 00:24:29.963710 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 17 00:24:29.965341 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 17 00:24:29.966347 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:24:29.967569 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:24:29.969892 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 17 00:24:29.971615 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 17 00:24:29.972717 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:24:29.986255 dracut-cmdline[208]: dracut-dracut-053
May 17 00:24:29.988761 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 17 00:24:29.991022 dracut-cmdline[208]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:24:29.997952 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 17 00:24:30.003501 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 17 00:24:30.026073 systemd-resolved[228]: Positive Trust Anchors:
May 17 00:24:30.026089 systemd-resolved[228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:24:30.026126 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 17 00:24:30.031009 systemd-resolved[228]: Defaulting to hostname 'linux'.
May 17 00:24:30.033857 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 17 00:24:30.034286 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 17 00:24:30.062515 kernel: SCSI subsystem initialized
May 17 00:24:30.074519 kernel: Loading iSCSI transport class v2.0-870.
May 17 00:24:30.085519 kernel: iscsi: registered transport (tcp)
May 17 00:24:30.107639 kernel: iscsi: registered transport (qla4xxx)
May 17 00:24:30.107720 kernel: QLogic iSCSI HBA Driver
May 17 00:24:30.147293 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 17 00:24:30.157752 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 17 00:24:30.182740 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 17 00:24:30.182811 kernel: device-mapper: uevent: version 1.0.3
May 17 00:24:30.185513 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 17 00:24:30.226536 kernel: raid6: avx512x4 gen() 15355 MB/s
May 17 00:24:30.244508 kernel: raid6: avx512x2 gen() 15530 MB/s
May 17 00:24:30.262527 kernel: raid6: avx512x1 gen() 15322 MB/s
May 17 00:24:30.280511 kernel: raid6: avx2x4 gen() 15241 MB/s
May 17 00:24:30.298513 kernel: raid6: avx2x2 gen() 15184 MB/s
May 17 00:24:30.316716 kernel: raid6: avx2x1 gen() 11614 MB/s
May 17 00:24:30.316773 kernel: raid6: using algorithm avx512x2 gen() 15530 MB/s
May 17 00:24:30.335701 kernel: raid6: .... xor() 24613 MB/s, rmw enabled
May 17 00:24:30.335766 kernel: raid6: using avx512x2 recovery algorithm
May 17 00:24:30.357523 kernel: xor: automatically using best checksumming function avx
May 17 00:24:30.521515 kernel: Btrfs loaded, zoned=no, fsverity=no
May 17 00:24:30.531882 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 17 00:24:30.542708 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:24:30.555745 systemd-udevd[399]: Using default interface naming scheme 'v255'.
May 17 00:24:30.560843 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:24:30.566663 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 17 00:24:30.588655 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
May 17 00:24:30.618805 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 00:24:30.625649 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 17 00:24:30.675919 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:24:30.685717 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 17 00:24:30.708835 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 17 00:24:30.714889 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 00:24:30.716557 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:24:30.718113 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 17 00:24:30.725119 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 17 00:24:30.755400 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 17 00:24:30.778533 kernel: cryptd: max_cpu_qlen set to 1000
May 17 00:24:30.786067 kernel: ena 0000:00:05.0: ENA device version: 0.10
May 17 00:24:30.786360 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
May 17 00:24:30.811504 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
May 17 00:24:30.816719 kernel: AVX2 version of gcm_enc/dec engaged.
May 17 00:24:30.816784 kernel: AES CTR mode by8 optimization enabled
May 17 00:24:30.828140 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:24:30.828933 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:57:7e:e8:31:97
May 17 00:24:30.829474 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:24:30.833388 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:24:30.838014 kernel: nvme nvme0: pci function 0000:00:04.0
May 17 00:24:30.838226 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
May 17 00:24:30.834551 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:24:30.835224 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:24:30.839187 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:24:30.848094 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:24:30.855366 kernel: nvme nvme0: 2/0/0 default/read/poll queues
May 17 00:24:30.865064 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 17 00:24:30.865135 kernel: GPT:9289727 != 16777215
May 17 00:24:30.865154 kernel: GPT:Alternate GPT header not at the end of the disk.
May 17 00:24:30.865181 kernel: GPT:9289727 != 16777215
May 17 00:24:30.865198 kernel: GPT: Use GNU Parted to correct GPT errors.
May 17 00:24:30.865221 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 00:24:30.872693 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:24:30.873319 (udev-worker)[454]: Network interface NamePolicy= disabled on kernel command line.
May 17 00:24:30.881686 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:24:30.902686 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:24:30.968660 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (459)
May 17 00:24:30.979459 kernel: BTRFS: device fsid 7f88d479-6686-439c-8052-b96f0a9d77bc devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (457)
May 17 00:24:31.044026 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
May 17 00:24:31.050808 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
May 17 00:24:31.057351 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
May 17 00:24:31.063202 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
May 17 00:24:31.063815 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
May 17 00:24:31.070701 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 17 00:24:31.078129 disk-uuid[630]: Primary Header is updated.
May 17 00:24:31.078129 disk-uuid[630]: Secondary Entries is updated.
May 17 00:24:31.078129 disk-uuid[630]: Secondary Header is updated.
May 17 00:24:31.083535 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 00:24:31.090508 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 00:24:31.096527 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 00:24:32.099568 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 00:24:32.099650 disk-uuid[631]: The operation has completed successfully.
May 17 00:24:32.232467 systemd[1]: disk-uuid.service: Deactivated successfully.
May 17 00:24:32.232634 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 17 00:24:32.259746 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 17 00:24:32.264502 sh[974]: Success
May 17 00:24:32.279532 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
May 17 00:24:32.374633 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 17 00:24:32.389349 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 17 00:24:32.390641 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 17 00:24:32.425507 kernel: BTRFS info (device dm-0): first mount of filesystem 7f88d479-6686-439c-8052-b96f0a9d77bc
May 17 00:24:32.425581 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 17 00:24:32.427759 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 17 00:24:32.429609 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 17 00:24:32.431833 kernel: BTRFS info (device dm-0): using free space tree
May 17 00:24:32.549511 kernel: BTRFS info (device dm-0): enabling ssd optimizations
May 17 00:24:32.578378 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 17 00:24:32.579753 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 17 00:24:32.584706 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 17 00:24:32.588688 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 17 00:24:32.614841 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:24:32.614910 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:24:32.617876 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 17 00:24:32.624543 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 17 00:24:32.638431 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 17 00:24:32.640544 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:24:32.646885 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 17 00:24:32.656728 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 17 00:24:32.688382 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 00:24:32.693719 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 17 00:24:32.723720 systemd-networkd[1166]: lo: Link UP
May 17 00:24:32.723732 systemd-networkd[1166]: lo: Gained carrier
May 17 00:24:32.725625 systemd-networkd[1166]: Enumeration completed
May 17 00:24:32.726082 systemd-networkd[1166]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:24:32.726087 systemd-networkd[1166]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:24:32.728043 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 17 00:24:32.729596 systemd[1]: Reached target network.target - Network.
May 17 00:24:32.730217 systemd-networkd[1166]: eth0: Link UP
May 17 00:24:32.730223 systemd-networkd[1166]: eth0: Gained carrier
May 17 00:24:32.730237 systemd-networkd[1166]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:24:32.742578 systemd-networkd[1166]: eth0: DHCPv4 address 172.31.31.125/20, gateway 172.31.16.1 acquired from 172.31.16.1
May 17 00:24:33.084470 ignition[1115]: Ignition 2.19.0
May 17 00:24:33.084510 ignition[1115]: Stage: fetch-offline
May 17 00:24:33.084773 ignition[1115]: no configs at "/usr/lib/ignition/base.d"
May 17 00:24:33.084785 ignition[1115]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 17 00:24:33.085222 ignition[1115]: Ignition finished successfully
May 17 00:24:33.087451 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 00:24:33.093682 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 17 00:24:33.108426 ignition[1174]: Ignition 2.19.0
May 17 00:24:33.108436 ignition[1174]: Stage: fetch
May 17 00:24:33.108938 ignition[1174]: no configs at "/usr/lib/ignition/base.d"
May 17 00:24:33.108951 ignition[1174]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 17 00:24:33.109158 ignition[1174]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 17 00:24:33.139109 ignition[1174]: PUT result: OK
May 17 00:24:33.140981 ignition[1174]: parsed url from cmdline: ""
May 17 00:24:33.140991 ignition[1174]: no config URL provided
May 17 00:24:33.140999 ignition[1174]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:24:33.141020 ignition[1174]: no config at "/usr/lib/ignition/user.ign"
May 17 00:24:33.141041 ignition[1174]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 17 00:24:33.141797 ignition[1174]: PUT result: OK
May 17 00:24:33.141833 ignition[1174]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
May 17 00:24:33.142523 ignition[1174]: GET result: OK
May 17 00:24:33.142604 ignition[1174]: parsing config with SHA512: 7adadf2594acf63415660e9e073400223f20619115abe8d4d9fec14bfcadbac15725ca3f9af3123b19e326d3455529173777efe6b8db61ecbbc75ac03bacb55a
May 17 00:24:33.147741 unknown[1174]: fetched base config from "system"
May 17 00:24:33.148639 unknown[1174]: fetched base config from "system"
May 17 00:24:33.148649 unknown[1174]: fetched user config from "aws"
May 17 00:24:33.150423 ignition[1174]: fetch: fetch complete
May 17 00:24:33.150439 ignition[1174]: fetch: fetch passed
May 17 00:24:33.150536 ignition[1174]: Ignition finished successfully
May 17 00:24:33.153035 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 17 00:24:33.158740 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 17 00:24:33.174886 ignition[1180]: Ignition 2.19.0
May 17 00:24:33.174899 ignition[1180]: Stage: kargs
May 17 00:24:33.175361 ignition[1180]: no configs at "/usr/lib/ignition/base.d"
May 17 00:24:33.175376 ignition[1180]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 17 00:24:33.175524 ignition[1180]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 17 00:24:33.177832 ignition[1180]: PUT result: OK
May 17 00:24:33.182224 ignition[1180]: kargs: kargs passed
May 17 00:24:33.182297 ignition[1180]: Ignition finished successfully
May 17 00:24:33.183287 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 17 00:24:33.186815 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 17 00:24:33.204094 ignition[1186]: Ignition 2.19.0
May 17 00:24:33.204108 ignition[1186]: Stage: disks
May 17 00:24:33.204595 ignition[1186]: no configs at "/usr/lib/ignition/base.d"
May 17 00:24:33.204609 ignition[1186]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 17 00:24:33.204745 ignition[1186]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 17 00:24:33.205753 ignition[1186]: PUT result: OK
May 17 00:24:33.208189 ignition[1186]: disks: disks passed
May 17 00:24:33.208267 ignition[1186]: Ignition finished successfully
May 17 00:24:33.210052 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 17 00:24:33.210682 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 17 00:24:33.211037 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 17 00:24:33.211579 systemd[1]: Reached target local-fs.target - Local File Systems.
May 17 00:24:33.212121 systemd[1]: Reached target sysinit.target - System Initialization.
May 17 00:24:33.212672 systemd[1]: Reached target basic.target - Basic System.
May 17 00:24:33.217707 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 17 00:24:33.276520 systemd-fsck[1194]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 17 00:24:33.279989 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 17 00:24:33.286637 systemd[1]: Mounting sysroot.mount - /sysroot...
May 17 00:24:33.389726 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 278698a4-82b6-49b4-b6df-f7999ed4e35e r/w with ordered data mode. Quota mode: none.
May 17 00:24:33.390377 systemd[1]: Mounted sysroot.mount - /sysroot.
May 17 00:24:33.391370 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 17 00:24:33.403640 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 00:24:33.407609 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 17 00:24:33.408845 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 17 00:24:33.408894 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 17 00:24:33.408920 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 00:24:33.414468 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 17 00:24:33.416429 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 17 00:24:33.429534 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1213)
May 17 00:24:33.434813 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:24:33.434893 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:24:33.434917 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 17 00:24:33.449650 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 17 00:24:33.450634 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 00:24:33.900054 initrd-setup-root[1237]: cut: /sysroot/etc/passwd: No such file or directory
May 17 00:24:33.904641 systemd-networkd[1166]: eth0: Gained IPv6LL
May 17 00:24:33.917833 initrd-setup-root[1244]: cut: /sysroot/etc/group: No such file or directory
May 17 00:24:33.922192 initrd-setup-root[1251]: cut: /sysroot/etc/shadow: No such file or directory
May 17 00:24:33.941500 initrd-setup-root[1258]: cut: /sysroot/etc/gshadow: No such file or directory
May 17 00:24:34.232397 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 17 00:24:34.237713 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 17 00:24:34.241680 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 17 00:24:34.249854 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 17 00:24:34.252528 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:24:34.284935 ignition[1326]: INFO : Ignition 2.19.0 May 17 00:24:34.286273 ignition[1326]: INFO : Stage: mount May 17 00:24:34.286273 ignition[1326]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:24:34.286273 ignition[1326]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 17 00:24:34.286273 ignition[1326]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 17 00:24:34.288668 ignition[1326]: INFO : PUT result: OK May 17 00:24:34.291698 ignition[1326]: INFO : mount: mount passed May 17 00:24:34.291698 ignition[1326]: INFO : Ignition finished successfully May 17 00:24:34.292844 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 17 00:24:34.293466 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 17 00:24:34.297596 systemd[1]: Starting ignition-files.service - Ignition (files)... May 17 00:24:34.323740 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:24:34.343504 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1337) May 17 00:24:34.343558 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:24:34.346688 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm May 17 00:24:34.346750 kernel: BTRFS info (device nvme0n1p6): using free space tree May 17 00:24:34.353515 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 17 00:24:34.355621 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 17 00:24:34.379642 ignition[1353]: INFO : Ignition 2.19.0 May 17 00:24:34.379642 ignition[1353]: INFO : Stage: files May 17 00:24:34.380772 ignition[1353]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:24:34.380772 ignition[1353]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 17 00:24:34.380772 ignition[1353]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 17 00:24:34.381853 ignition[1353]: INFO : PUT result: OK May 17 00:24:34.383494 ignition[1353]: DEBUG : files: compiled without relabeling support, skipping May 17 00:24:34.386425 ignition[1353]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 17 00:24:34.386425 ignition[1353]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 17 00:24:34.407431 ignition[1353]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 17 00:24:34.408310 ignition[1353]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 17 00:24:34.408310 ignition[1353]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 17 00:24:34.408054 unknown[1353]: wrote ssh authorized keys file for user: core May 17 00:24:34.415577 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" May 17 00:24:34.416416 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" May 17 00:24:34.416416 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 17 00:24:34.416416 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: 
attempt #1 May 17 00:24:34.811602 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 17 00:24:34.946243 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 17 00:24:34.946243 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 17 00:24:34.948423 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 17 00:24:34.948423 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 17 00:24:34.948423 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 17 00:24:34.948423 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:24:34.948423 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:24:34.948423 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:24:34.948423 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:24:34.948423 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:24:34.948423 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:24:34.948423 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:24:34.948423 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:24:34.948423 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:24:34.948423 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 May 17 00:24:35.536068 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 17 00:24:35.866593 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:24:35.866593 ignition[1353]: INFO : files: op(c): [started] processing unit "containerd.service" May 17 00:24:35.869234 ignition[1353]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 17 00:24:35.870187 ignition[1353]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 17 00:24:35.870187 ignition[1353]: INFO : files: op(c): [finished] processing unit 
"containerd.service" May 17 00:24:35.870187 ignition[1353]: INFO : files: op(e): [started] processing unit "prepare-helm.service" May 17 00:24:35.870187 ignition[1353]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:24:35.870187 ignition[1353]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:24:35.870187 ignition[1353]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" May 17 00:24:35.870187 ignition[1353]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" May 17 00:24:35.870187 ignition[1353]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" May 17 00:24:35.870187 ignition[1353]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 00:24:35.870187 ignition[1353]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 00:24:35.870187 ignition[1353]: INFO : files: files passed May 17 00:24:35.870187 ignition[1353]: INFO : Ignition finished successfully May 17 00:24:35.871130 systemd[1]: Finished ignition-files.service - Ignition (files). May 17 00:24:35.877707 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 17 00:24:35.880595 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 17 00:24:35.883629 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 00:24:35.883730 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 17 00:24:35.895553 initrd-setup-root-after-ignition[1382]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:24:35.895553 initrd-setup-root-after-ignition[1382]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 17 00:24:35.898538 initrd-setup-root-after-ignition[1386]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:24:35.898195 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:24:35.899149 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 17 00:24:35.904711 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 17 00:24:35.932921 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 00:24:35.933123 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 17 00:24:35.934586 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 17 00:24:35.935432 systemd[1]: Reached target initrd.target - Initrd Default Target. May 17 00:24:35.936237 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 17 00:24:35.943714 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 17 00:24:35.956701 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:24:35.961698 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 17 00:24:35.982711 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 17 00:24:35.983314 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
May 17 00:24:35.984168 systemd[1]: Stopped target timers.target - Timer Units. May 17 00:24:35.984930 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 17 00:24:35.985146 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:24:35.986092 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 17 00:24:35.986875 systemd[1]: Stopped target basic.target - Basic System. May 17 00:24:35.987561 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 17 00:24:35.988209 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:24:35.988882 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 17 00:24:35.989667 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 17 00:24:35.990365 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 17 00:24:35.991046 systemd[1]: Stopped target sysinit.target - System Initialization. May 17 00:24:35.992029 systemd[1]: Stopped target local-fs.target - Local File Systems. May 17 00:24:35.992768 systemd[1]: Stopped target swap.target - Swaps. May 17 00:24:35.993556 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 00:24:35.993684 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 17 00:24:35.994557 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 17 00:24:35.995252 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:24:35.995862 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 17 00:24:35.996559 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:24:35.996994 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 00:24:35.997186 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 17 00:24:35.998373 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 00:24:35.998518 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:24:35.999130 systemd[1]: ignition-files.service: Deactivated successfully. May 17 00:24:35.999230 systemd[1]: Stopped ignition-files.service - Ignition (files). May 17 00:24:36.005671 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 17 00:24:36.009742 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 17 00:24:36.010569 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 00:24:36.011035 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:24:36.011890 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 00:24:36.011991 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:24:36.016819 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 00:24:36.016911 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
May 17 00:24:36.022293 ignition[1406]: INFO : Ignition 2.19.0 May 17 00:24:36.022293 ignition[1406]: INFO : Stage: umount May 17 00:24:36.022293 ignition[1406]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:24:36.022293 ignition[1406]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 17 00:24:36.022293 ignition[1406]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 17 00:24:36.024344 ignition[1406]: INFO : PUT result: OK May 17 00:24:36.026891 ignition[1406]: INFO : umount: umount passed May 17 00:24:36.027608 ignition[1406]: INFO : Ignition finished successfully May 17 00:24:36.027930 systemd[1]: ignition-mount.service: Deactivated successfully. May 17 00:24:36.028029 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 17 00:24:36.029772 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 00:24:36.030206 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 17 00:24:36.030615 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:24:36.030669 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 17 00:24:36.031231 systemd[1]: ignition-fetch.service: Deactivated successfully. May 17 00:24:36.031272 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 17 00:24:36.032765 systemd[1]: Stopped target network.target - Network. May 17 00:24:36.033456 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 17 00:24:36.033843 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:24:36.034166 systemd[1]: Stopped target paths.target - Path Units. May 17 00:24:36.034433 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 17 00:24:36.036543 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:24:36.036863 systemd[1]: Stopped target slices.target - Slice Units. May 17 00:24:36.037238 systemd[1]: Stopped target sockets.target - Socket Units. May 17 00:24:36.037559 systemd[1]: iscsid.socket: Deactivated successfully. May 17 00:24:36.037595 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 17 00:24:36.037929 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 00:24:36.037964 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:24:36.038792 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 00:24:36.038845 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 17 00:24:36.039547 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 17 00:24:36.039592 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 17 00:24:36.040234 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 17 00:24:36.040733 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 17 00:24:36.043529 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 17 00:24:36.044556 systemd-networkd[1166]: eth0: DHCPv6 lease lost May 17 00:24:36.046204 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 00:24:36.046293 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 17 00:24:36.047107 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:24:36.047193 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 17 00:24:36.048799 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
May 17 00:24:36.048848 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 17 00:24:36.049662 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 00:24:36.049713 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 17 00:24:36.056766 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 17 00:24:36.057306 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:24:36.057373 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:24:36.057921 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:24:36.061408 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 00:24:36.061523 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 17 00:24:36.069600 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:24:36.069715 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 17 00:24:36.070235 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 00:24:36.070279 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 17 00:24:36.071179 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 17 00:24:36.071225 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:24:36.073118 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:24:36.073254 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:24:36.074200 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 00:24:36.074281 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 17 00:24:36.075679 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 00:24:36.075750 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 17 00:24:36.076562 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 00:24:36.076595 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:24:36.077291 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 00:24:36.077335 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 17 00:24:36.078358 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:24:36.078407 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 17 00:24:36.079473 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:24:36.079545 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:24:36.086663 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 17 00:24:36.087089 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 17 00:24:36.087152 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:24:36.088923 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 17 00:24:36.088975 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:24:36.089432 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 00:24:36.089473 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
May 17 00:24:36.089826 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:24:36.089865 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:24:36.093790 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 00:24:36.093891 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 17 00:24:36.094830 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 17 00:24:36.096372 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 17 00:24:36.172184 systemd[1]: Switching root. May 17 00:24:36.185767 systemd-journald[178]: Journal stopped May 17 00:24:38.108287 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). May 17 00:24:38.108391 kernel: SELinux: policy capability network_peer_controls=1 May 17 00:24:38.108414 kernel: SELinux: policy capability open_perms=1 May 17 00:24:38.108432 kernel: SELinux: policy capability extended_socket_class=1 May 17 00:24:38.108450 kernel: SELinux: policy capability always_check_network=0 May 17 00:24:38.108468 kernel: SELinux: policy capability cgroup_seclabel=1 May 17 00:24:38.110534 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 17 00:24:38.110569 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 17 00:24:38.110593 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 17 00:24:38.110618 kernel: audit: type=1403 audit(1747441476.920:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 17 00:24:38.110638 systemd[1]: Successfully loaded SELinux policy in 110.190ms. May 17 00:24:38.110670 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.361ms. May 17 00:24:38.110694 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 00:24:38.110716 systemd[1]: Detected virtualization amazon. May 17 00:24:38.110738 systemd[1]: Detected architecture x86-64. May 17 00:24:38.110758 systemd[1]: Detected first boot. May 17 00:24:38.110781 systemd[1]: Initializing machine ID from VM UUID. May 17 00:24:38.110806 zram_generator::config[1465]: No configuration found. May 17 00:24:38.110833 systemd[1]: Populated /etc with preset unit settings. May 17 00:24:38.110854 systemd[1]: Queued start job for default target multi-user.target. May 17 00:24:38.110877 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. May 17 00:24:38.110899 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 17 00:24:38.110921 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 17 00:24:38.110943 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 17 00:24:38.110964 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 17 00:24:38.110989 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 17 00:24:38.111011 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 17 00:24:38.111032 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 17 00:24:38.111053 systemd[1]: Created slice user.slice - User and Session Slice. 
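"Initializing machine ID from VM UUID" above: on first boot systemd can seed /etc/machine-id from the UUID the hypervisor exposes through DMI. A rough Python sketch of that derivation, assuming the common x86 sysfs path (requires root; systemd's real lookup consults more sources than this):

    from pathlib import Path
    from uuid import UUID

    # EC2/KVM guests expose the VM UUID via DMI; a machine-id is the same
    # 128-bit value written as 32 hex digits without dashes.
    raw = Path("/sys/class/dmi/id/product_uuid").read_text().strip()
    print(UUID(raw).hex)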
May 17 00:24:38.111075 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:24:38.111097 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:24:38.111119 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 17 00:24:38.111140 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 17 00:24:38.111162 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 17 00:24:38.111187 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 00:24:38.111206 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 17 00:24:38.111224 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:24:38.111243 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 17 00:24:38.111262 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:24:38.111282 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 00:24:38.111303 systemd[1]: Reached target slices.target - Slice Units. May 17 00:24:38.111321 systemd[1]: Reached target swap.target - Swaps. May 17 00:24:38.111344 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 17 00:24:38.111363 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 17 00:24:38.111384 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 17 00:24:38.111405 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 17 00:24:38.111430 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 00:24:38.111448 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 00:24:38.111466 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:24:38.114256 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 17 00:24:38.114292 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 17 00:24:38.114321 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 17 00:24:38.114340 systemd[1]: Mounting media.mount - External Media Directory... May 17 00:24:38.114359 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:24:38.114378 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 17 00:24:38.114396 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 17 00:24:38.114415 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 17 00:24:38.114435 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 17 00:24:38.114453 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:24:38.114473 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 00:24:38.114521 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 17 00:24:38.114540 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:24:38.114559 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
May 17 00:24:38.114577 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:24:38.114595 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 17 00:24:38.114614 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:24:38.114634 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 17 00:24:38.114654 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 17 00:24:38.114677 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) May 17 00:24:38.114696 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 00:24:38.114715 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 00:24:38.114733 kernel: loop: module loaded May 17 00:24:38.114752 kernel: fuse: init (API version 7.39) May 17 00:24:38.114771 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 17 00:24:38.114790 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 17 00:24:38.115924 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 00:24:38.115955 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:24:38.115985 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 17 00:24:38.116006 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 17 00:24:38.116027 systemd[1]: Mounted media.mount - External Media Directory. May 17 00:24:38.116049 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 17 00:24:38.116070 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 17 00:24:38.116091 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 17 00:24:38.116114 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 17 00:24:38.116178 systemd-journald[1569]: Collecting audit messages is disabled. May 17 00:24:38.116228 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:24:38.116250 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 17 00:24:38.116271 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 17 00:24:38.116376 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:24:38.116398 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:24:38.116417 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:24:38.116438 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:24:38.116462 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 17 00:24:38.116504 systemd-journald[1569]: Journal started May 17 00:24:38.116544 systemd-journald[1569]: Runtime Journal (/run/log/journal/ec20170fb24f5a29351d133d59940101) is 4.7M, max 38.2M, 33.4M free. May 17 00:24:38.119013 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 17 00:24:38.145959 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:24:38.122954 systemd[1]: modprobe@loop.service: Deactivated successfully. 
May 17 00:24:38.155671 kernel: ACPI: bus type drm_connector registered May 17 00:24:38.123173 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:24:38.124215 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 00:24:38.125228 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 17 00:24:38.126212 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 17 00:24:38.133473 systemd[1]: Reached target network-pre.target - Preparation for Network. May 17 00:24:38.150244 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 17 00:24:38.157757 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 17 00:24:38.159712 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 00:24:38.171560 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 17 00:24:38.174720 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 17 00:24:38.175399 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:24:38.191874 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 17 00:24:38.192785 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:24:38.196902 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:24:38.209838 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 00:24:38.219444 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:24:38.220444 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 00:24:38.227013 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 17 00:24:38.227882 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 17 00:24:38.229112 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 17 00:24:38.239019 systemd-journald[1569]: Time spent on flushing to /var/log/journal/ec20170fb24f5a29351d133d59940101 is 69.263ms for 971 entries. May 17 00:24:38.239019 systemd-journald[1569]: System Journal (/var/log/journal/ec20170fb24f5a29351d133d59940101) is 8.0M, max 195.6M, 187.6M free. May 17 00:24:38.327871 systemd-journald[1569]: Received client request to flush runtime journal. May 17 00:24:38.248796 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 17 00:24:38.295739 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:24:38.306772 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 17 00:24:38.309145 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:24:38.326563 udevadm[1628]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 17 00:24:38.330516 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 17 00:24:38.347524 systemd-tmpfiles[1608]: ACLs are not supported, ignoring. 
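The journal size figures above ("Runtime Journal ... is 4.7M, max 38.2M" and "System Journal ... is 8.0M, max 195.6M") are caps journald derives from the size of the backing filesystems (/run and /var/log/journal), each limited to 4G by default. They can be pinned explicitly with a drop-in; a hypothetical override, with values chosen to match the logged caps:

    # /etc/systemd/journald.conf.d/size.conf (illustrative drop-in)
    [Journal]
    RuntimeMaxUse=38M
    SystemMaxUse=195M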
May 17 00:24:38.347553 systemd-tmpfiles[1608]: ACLs are not supported, ignoring. May 17 00:24:38.356330 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:24:38.364795 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 17 00:24:38.423026 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 17 00:24:38.434693 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 00:24:38.456640 systemd-tmpfiles[1639]: ACLs are not supported, ignoring. May 17 00:24:38.457102 systemd-tmpfiles[1639]: ACLs are not supported, ignoring. May 17 00:24:38.464242 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:24:39.000920 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 17 00:24:39.006748 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:24:39.043022 systemd-udevd[1645]: Using default interface naming scheme 'v255'. May 17 00:24:39.106416 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:24:39.116670 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:24:39.149734 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 17 00:24:39.159821 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. May 17 00:24:39.161732 (udev-worker)[1647]: Network interface NamePolicy= disabled on kernel command line. May 17 00:24:39.221669 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 17 00:24:39.224509 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr May 17 00:24:39.229509 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 17 00:24:39.241521 kernel: ACPI: button: Power Button [PWRF] May 17 00:24:39.243533 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 May 17 00:24:39.252695 kernel: ACPI: button: Sleep Button [SLPF] May 17 00:24:39.299509 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 May 17 00:24:39.298610 systemd-networkd[1648]: lo: Link UP May 17 00:24:39.298616 systemd-networkd[1648]: lo: Gained carrier May 17 00:24:39.300990 systemd-networkd[1648]: Enumeration completed May 17 00:24:39.301143 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 00:24:39.301883 systemd-networkd[1648]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:24:39.301887 systemd-networkd[1648]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:24:39.306519 kernel: mousedev: PS/2 mouse device common for all mice May 17 00:24:39.311282 systemd-networkd[1648]: eth0: Link UP May 17 00:24:39.311601 systemd-networkd[1648]: eth0: Gained carrier May 17 00:24:39.311790 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 17 00:24:39.312360 systemd-networkd[1648]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
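The "found matching network" lines show eth0 being claimed by Flatcar's catch-all /usr/lib/systemd/network/zz-default.network, which is what requests the DHCPv4 lease that appears next. In spirit it reduces to a .network unit like this sketch (the match glob and options here are illustrative; the shipped file carries more DHCP tuning):

    [Match]
    Name=en* eth*

    [Network]
    DHCP=yes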
May 17 00:24:39.322670 systemd-networkd[1648]: eth0: DHCPv4 address 172.31.31.125/20, gateway 172.31.16.1 acquired from 172.31.16.1 May 17 00:24:39.330016 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:24:39.376516 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1653) May 17 00:24:39.518607 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 17 00:24:39.519963 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:24:39.532723 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. May 17 00:24:39.539652 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 17 00:24:39.564510 lvm[1769]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:24:39.590581 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 17 00:24:39.591230 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:24:39.597692 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 17 00:24:39.603612 lvm[1772]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:24:39.630948 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 17 00:24:39.632150 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 17 00:24:39.632693 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 00:24:39.632804 systemd[1]: Reached target local-fs.target - Local File Systems. May 17 00:24:39.633251 systemd[1]: Reached target machines.target - Containers. May 17 00:24:39.634804 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 17 00:24:39.642798 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 17 00:24:39.646653 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 17 00:24:39.647202 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:24:39.648094 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 17 00:24:39.651731 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 17 00:24:39.654742 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 17 00:24:39.657217 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 17 00:24:39.682716 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 00:24:39.686764 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 17 00:24:39.689209 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
May 17 00:24:39.707550 kernel: loop0: detected capacity change from 0 to 61336 May 17 00:24:39.811521 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 00:24:39.842500 kernel: loop1: detected capacity change from 0 to 142488 May 17 00:24:39.942526 kernel: loop2: detected capacity change from 0 to 221472 May 17 00:24:40.050518 kernel: loop3: detected capacity change from 0 to 140768 May 17 00:24:40.171508 kernel: loop4: detected capacity change from 0 to 61336 May 17 00:24:40.193531 kernel: loop5: detected capacity change from 0 to 142488 May 17 00:24:40.218504 kernel: loop6: detected capacity change from 0 to 221472 May 17 00:24:40.250526 kernel: loop7: detected capacity change from 0 to 140768 May 17 00:24:40.269704 (sd-merge)[1793]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. May 17 00:24:40.270469 (sd-merge)[1793]: Merged extensions into '/usr'. May 17 00:24:40.274559 systemd[1]: Reloading requested from client PID 1780 ('systemd-sysext') (unit systemd-sysext.service)... May 17 00:24:40.274577 systemd[1]: Reloading... May 17 00:24:40.332299 zram_generator::config[1821]: No configuration found. May 17 00:24:40.486460 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:24:40.576689 systemd[1]: Reloading finished in 301 ms. May 17 00:24:40.592078 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 17 00:24:40.602707 systemd[1]: Starting ensure-sysext.service... May 17 00:24:40.607696 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 00:24:40.624619 systemd-networkd[1648]: eth0: Gained IPv6LL May 17 00:24:40.634066 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 17 00:24:40.643660 systemd[1]: Reloading requested from client PID 1878 ('systemctl') (unit ensure-sysext.service)... May 17 00:24:40.643679 systemd[1]: Reloading... May 17 00:24:40.648312 systemd-tmpfiles[1879]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 00:24:40.648865 systemd-tmpfiles[1879]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 17 00:24:40.650212 systemd-tmpfiles[1879]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 17 00:24:40.650754 systemd-tmpfiles[1879]: ACLs are not supported, ignoring. May 17 00:24:40.650882 systemd-tmpfiles[1879]: ACLs are not supported, ignoring. May 17 00:24:40.667476 systemd-tmpfiles[1879]: Detected autofs mount point /boot during canonicalization of boot. May 17 00:24:40.667511 systemd-tmpfiles[1879]: Skipping /boot May 17 00:24:40.681029 systemd-tmpfiles[1879]: Detected autofs mount point /boot during canonicalization of boot. May 17 00:24:40.681180 systemd-tmpfiles[1879]: Skipping /boot May 17 00:24:40.805504 zram_generator::config[1921]: No configuration found. May 17 00:24:40.926720 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:24:41.008045 systemd[1]: Reloading finished in 363 ms. May 17 00:24:41.027199 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
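The loop0-loop7 capacity changes and the (sd-merge) lines above are systemd-sysext attaching each extension image and overlaying the set onto /usr, followed by a reload so units gained from the merge become visible. An image is only merged when its embedded release metadata matches the host; a sketch of that file inside the kubernetes image (field values are illustrative assumptions):

    # usr/lib/extension-release.d/extension-release.kubernetes
    ID=flatcar
    SYSEXT_LEVEL=1.0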
May 17 00:24:41.041685 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:24:41.045586 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 17 00:24:41.055047 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 17 00:24:41.065655 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 00:24:41.072013 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 17 00:24:41.082102 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:24:41.082408 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:24:41.090964 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:24:41.103159 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:24:41.108237 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:24:41.113267 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:24:41.113678 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:24:41.123512 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:24:41.123761 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:24:41.126210 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:24:41.126452 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:24:41.154314 augenrules[1997]: No rules May 17 00:24:41.154029 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:24:41.154607 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:24:41.159545 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:24:41.164463 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:24:41.172300 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:24:41.187996 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:24:41.188709 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:24:41.189059 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:24:41.192398 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:24:41.196384 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 17 00:24:41.205115 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:24:41.205378 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:24:41.207851 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:24:41.208178 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
May 17 00:24:41.213082 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 17 00:24:41.243433 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:24:41.244256 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:24:41.252948 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:24:41.268077 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 17 00:24:41.285817 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:24:41.297279 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:24:41.298649 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:24:41.299057 systemd[1]: Reached target time-set.target - System Time Set. May 17 00:24:41.300099 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:24:41.302912 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:24:41.303142 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:24:41.305144 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:24:41.305373 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 00:24:41.312603 systemd-resolved[1975]: Positive Trust Anchors: May 17 00:24:41.313013 systemd-resolved[1975]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:24:41.313142 systemd-resolved[1975]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:24:41.316389 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 17 00:24:41.318033 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:24:41.318280 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:24:41.321840 systemd[1]: Finished ensure-sysext.service. May 17 00:24:41.324959 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:24:41.325224 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:24:41.334779 systemd-resolved[1975]: Defaulting to hostname 'linux'. May 17 00:24:41.338418 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:24:41.339164 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
May 17 00:24:41.339219 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:24:41.339546 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:24:41.340256 systemd[1]: Reached target network.target - Network. May 17 00:24:41.340958 systemd[1]: Reached target network-online.target - Network is Online. May 17 00:24:41.341734 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:24:41.344880 ldconfig[1776]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 17 00:24:41.350727 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 17 00:24:41.357803 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 17 00:24:41.370299 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 17 00:24:41.370903 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:24:41.372053 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 17 00:24:41.372470 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 17 00:24:41.373150 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 17 00:24:41.373600 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 17 00:24:41.373912 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 17 00:24:41.374238 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:24:41.374271 systemd[1]: Reached target paths.target - Path Units. May 17 00:24:41.374727 systemd[1]: Reached target timers.target - Timer Units. May 17 00:24:41.375632 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 17 00:24:41.377449 systemd[1]: Starting docker.socket - Docker Socket for the API... May 17 00:24:41.378898 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 17 00:24:41.382558 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 17 00:24:41.382936 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:24:41.383248 systemd[1]: Reached target basic.target - Basic System. May 17 00:24:41.383711 systemd[1]: System is tainted: cgroupsv1 May 17 00:24:41.383745 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 17 00:24:41.383765 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 17 00:24:41.386581 systemd[1]: Starting containerd.service - containerd container runtime... May 17 00:24:41.393696 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 17 00:24:41.398664 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 17 00:24:41.410684 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 17 00:24:41.415721 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
May 17 00:24:41.416340 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 17 00:24:41.421446 jq[2047]: false May 17 00:24:41.422545 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:24:41.435707 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 17 00:24:41.456695 systemd[1]: Started ntpd.service - Network Time Service. May 17 00:24:41.464039 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 17 00:24:41.467102 dbus-daemon[2045]: [system] SELinux support is enabled May 17 00:24:41.472704 dbus-daemon[2045]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1648 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 17 00:24:41.488570 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 17 00:24:41.497782 extend-filesystems[2048]: Found loop4 May 17 00:24:41.497782 extend-filesystems[2048]: Found loop5 May 17 00:24:41.497782 extend-filesystems[2048]: Found loop6 May 17 00:24:41.497782 extend-filesystems[2048]: Found loop7 May 17 00:24:41.497782 extend-filesystems[2048]: Found nvme0n1 May 17 00:24:41.497782 extend-filesystems[2048]: Found nvme0n1p1 May 17 00:24:41.497782 extend-filesystems[2048]: Found nvme0n1p2 May 17 00:24:41.497782 extend-filesystems[2048]: Found nvme0n1p3 May 17 00:24:41.497782 extend-filesystems[2048]: Found usr May 17 00:24:41.497782 extend-filesystems[2048]: Found nvme0n1p4 May 17 00:24:41.497782 extend-filesystems[2048]: Found nvme0n1p6 May 17 00:24:41.497782 extend-filesystems[2048]: Found nvme0n1p7 May 17 00:24:41.497782 extend-filesystems[2048]: Found nvme0n1p9 May 17 00:24:41.497782 extend-filesystems[2048]: Checking size of /dev/nvme0n1p9 May 17 00:24:41.501319 systemd[1]: Starting setup-oem.service - Setup OEM... May 17 00:24:41.532788 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 17 00:24:41.537170 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 17 00:24:41.564794 systemd[1]: Starting systemd-logind.service - User Login Management... May 17 00:24:41.566397 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 17 00:24:41.573794 extend-filesystems[2048]: Resized partition /dev/nvme0n1p9 May 17 00:24:41.582586 systemd[1]: Starting update-engine.service - Update Engine... May 17 00:24:41.594807 extend-filesystems[2082]: resize2fs 1.47.1 (20-May-2024) May 17 00:24:41.592641 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 17 00:24:41.598903 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 17 00:24:41.612634 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks May 17 00:24:41.628821 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
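extend-filesystems then grows the root filesystem into the enlarged nvme0n1p9; the kernel message above, "resizing filesystem from 553472 to 1489915 blocks", works out to roughly 2.1 GiB -> 5.7 GiB assuming ext4's default 4 KiB block size:

    # Block counts from the EXT4-fs resize message; 4 KiB ext4 blocks assumed.
    BLOCK = 4096
    before, after = 553472, 1489915
    print(f"{before * BLOCK / 2**30:.2f} GiB -> {after * BLOCK / 2**30:.2f} GiB")
    # prints: 2.11 GiB -> 5.68 GiB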
May 17 00:24:41.643056 ntpd[2055]: ntpd 4.2.8p17@1.4004-o Fri May 16 22:07:47 UTC 2025 (1): Starting May 17 00:24:41.659433 ntpd[2055]: 17 May 00:24:41 ntpd[2055]: ntpd 4.2.8p17@1.4004-o Fri May 16 22:07:47 UTC 2025 (1): Starting May 17 00:24:41.659433 ntpd[2055]: 17 May 00:24:41 ntpd[2055]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp May 17 00:24:41.659433 ntpd[2055]: 17 May 00:24:41 ntpd[2055]: ---------------------------------------------------- May 17 00:24:41.659433 ntpd[2055]: 17 May 00:24:41 ntpd[2055]: ntp-4 is maintained by Network Time Foundation, May 17 00:24:41.659433 ntpd[2055]: 17 May 00:24:41 ntpd[2055]: Inc. (NTF), a non-profit 501(c)(3) public-benefit May 17 00:24:41.659433 ntpd[2055]: 17 May 00:24:41 ntpd[2055]: corporation. Support and training for ntp-4 are May 17 00:24:41.659433 ntpd[2055]: 17 May 00:24:41 ntpd[2055]: available at https://www.nwtime.org/support May 17 00:24:41.659433 ntpd[2055]: 17 May 00:24:41 ntpd[2055]: ---------------------------------------------------- May 17 00:24:41.659433 ntpd[2055]: 17 May 00:24:41 ntpd[2055]: proto: precision = 0.066 usec (-24) May 17 00:24:41.659433 ntpd[2055]: 17 May 00:24:41 ntpd[2055]: basedate set to 2025-05-04 May 17 00:24:41.659433 ntpd[2055]: 17 May 00:24:41 ntpd[2055]: gps base set to 2025-05-04 (week 2365) May 17 00:24:41.653515 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 17 00:24:41.643087 ntpd[2055]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp May 17 00:24:41.660696 coreos-metadata[2044]: May 17 00:24:41.652 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 17 00:24:41.660696 coreos-metadata[2044]: May 17 00:24:41.658 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 May 17 00:24:41.660696 coreos-metadata[2044]: May 17 00:24:41.659 INFO Fetch successful May 17 00:24:41.660696 coreos-metadata[2044]: May 17 00:24:41.659 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 May 17 00:24:41.661077 update_engine[2081]: I20250517 00:24:41.650630 2081 main.cc:92] Flatcar Update Engine starting May 17 00:24:41.661077 update_engine[2081]: I20250517 00:24:41.652707 2081 update_check_scheduler.cc:74] Next update check in 10m20s May 17 00:24:41.643100 ntpd[2055]: ---------------------------------------------------- May 17 00:24:41.643113 ntpd[2055]: ntp-4 is maintained by Network Time Foundation, May 17 00:24:41.643123 ntpd[2055]: Inc. (NTF), a non-profit 501(c)(3) public-benefit May 17 00:24:41.643133 ntpd[2055]: corporation. Support and training for ntp-4 are May 17 00:24:41.643143 ntpd[2055]: available at https://www.nwtime.org/support May 17 00:24:41.643152 ntpd[2055]: ---------------------------------------------------- May 17 00:24:41.653039 ntpd[2055]: proto: precision = 0.066 usec (-24) May 17 00:24:41.654121 ntpd[2055]: basedate set to 2025-05-04 May 17 00:24:41.654142 ntpd[2055]: gps base set to 2025-05-04 (week 2365) May 17 00:24:41.662543 coreos-metadata[2044]: May 17 00:24:41.662 INFO Fetch successful May 17 00:24:41.662543 coreos-metadata[2044]: May 17 00:24:41.662 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 May 17 00:24:41.663150 systemd[1]: motdgen.service: Deactivated successfully. 
May 17 00:24:41.663452 coreos-metadata[2044]: May 17 00:24:41.663 INFO Fetch successful May 17 00:24:41.663452 coreos-metadata[2044]: May 17 00:24:41.663 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 May 17 00:24:41.663531 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 17 00:24:41.664241 coreos-metadata[2044]: May 17 00:24:41.664 INFO Fetch successful May 17 00:24:41.664241 coreos-metadata[2044]: May 17 00:24:41.664 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 May 17 00:24:41.665142 coreos-metadata[2044]: May 17 00:24:41.665 INFO Fetch failed with 404: resource not found May 17 00:24:41.665142 coreos-metadata[2044]: May 17 00:24:41.665 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 May 17 00:24:41.666074 coreos-metadata[2044]: May 17 00:24:41.665 INFO Fetch successful May 17 00:24:41.666222 coreos-metadata[2044]: May 17 00:24:41.666 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 May 17 00:24:41.667017 coreos-metadata[2044]: May 17 00:24:41.666 INFO Fetch successful May 17 00:24:41.667359 coreos-metadata[2044]: May 17 00:24:41.667 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 May 17 00:24:41.667362 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 17 00:24:41.668183 coreos-metadata[2044]: May 17 00:24:41.667 INFO Fetch successful May 17 00:24:41.668183 coreos-metadata[2044]: May 17 00:24:41.668 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 May 17 00:24:41.669606 coreos-metadata[2044]: May 17 00:24:41.669 INFO Fetch successful May 17 00:24:41.669606 coreos-metadata[2044]: May 17 00:24:41.669 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 May 17 00:24:41.672676 coreos-metadata[2044]: May 17 00:24:41.670 INFO Fetch successful May 17 00:24:41.679575 ntpd[2055]: Listen and drop on 0 v6wildcard [::]:123 May 17 00:24:41.680914 ntpd[2055]: 17 May 00:24:41 ntpd[2055]: Listen and drop on 0 v6wildcard [::]:123 May 17 00:24:41.682590 ntpd[2055]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 17 00:24:41.682708 ntpd[2055]: 17 May 00:24:41 ntpd[2055]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 17 00:24:41.682841 ntpd[2055]: Listen normally on 2 lo 127.0.0.1:123 May 17 00:24:41.682901 ntpd[2055]: 17 May 00:24:41 ntpd[2055]: Listen normally on 2 lo 127.0.0.1:123 May 17 00:24:41.682956 ntpd[2055]: 17 May 00:24:41 ntpd[2055]: Listen normally on 3 eth0 172.31.31.125:123 May 17 00:24:41.682906 ntpd[2055]: Listen normally on 3 eth0 172.31.31.125:123 May 17 00:24:41.683062 ntpd[2055]: 17 May 00:24:41 ntpd[2055]: Listen normally on 4 lo [::1]:123 May 17 00:24:41.683062 ntpd[2055]: 17 May 00:24:41 ntpd[2055]: Listen normally on 5 eth0 [fe80::457:7eff:fee8:3197%2]:123 May 17 00:24:41.683062 ntpd[2055]: 17 May 00:24:41 ntpd[2055]: Listening on routing socket on fd #22 for interface updates May 17 00:24:41.682954 ntpd[2055]: Listen normally on 4 lo [::1]:123 May 17 00:24:41.683007 ntpd[2055]: Listen normally on 5 eth0 [fe80::457:7eff:fee8:3197%2]:123 May 17 00:24:41.683057 ntpd[2055]: Listening on routing socket on fd #22 for interface updates May 17 00:24:41.701114 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 00:24:41.701447 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
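The coreos-metadata fetches above follow the IMDSv2 flow: a PUT to /latest/api/token returns a session token that authorizes the subsequent GETs against the versioned metadata paths. The single 404 on meta-data/ipv6 just means this instance has no IPv6 address; the agent treats that field as optional. A minimal stdlib sketch of the same two-step exchange (the 21600-second TTL is the customary default, not taken from the log):

import urllib.request

BASE = "http://169.254.169.254"

# Step 1: PUT for a session token.
tok_req = urllib.request.Request(
    BASE + "/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(tok_req, timeout=2).read().decode()

# Step 2: GET a metadata path, presenting the token.
md_req = urllib.request.Request(
    BASE + "/2021-01-03/meta-data/instance-id",
    headers={"X-aws-ec2-metadata-token": token},
)
print(urllib.request.urlopen(md_req, timeout=2).read().decode())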
May 17 00:24:41.714557 jq[2085]: true May 17 00:24:41.726024 ntpd[2055]: 17 May 00:24:41 ntpd[2055]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 17 00:24:41.726024 ntpd[2055]: 17 May 00:24:41 ntpd[2055]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 17 00:24:41.722844 ntpd[2055]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 17 00:24:41.722886 ntpd[2055]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 17 00:24:41.754563 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1653) May 17 00:24:41.761166 (ntainerd)[2119]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 17 00:24:41.795513 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 May 17 00:24:41.809538 jq[2122]: true May 17 00:24:41.810625 extend-filesystems[2082]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required May 17 00:24:41.810625 extend-filesystems[2082]: old_desc_blocks = 1, new_desc_blocks = 1 May 17 00:24:41.810625 extend-filesystems[2082]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. May 17 00:24:41.840699 extend-filesystems[2048]: Resized filesystem in /dev/nvme0n1p9 May 17 00:24:41.821938 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:24:41.819236 dbus-daemon[2045]: [system] Successfully activated service 'org.freedesktop.systemd1' May 17 00:24:41.822270 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 17 00:24:41.842136 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 17 00:24:41.878893 tar[2100]: linux-amd64/helm May 17 00:24:41.881580 systemd[1]: Started update-engine.service - Update Engine. May 17 00:24:41.887319 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 17 00:24:41.887445 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 00:24:41.887475 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 17 00:24:41.907049 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... May 17 00:24:41.909773 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 00:24:41.909818 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 17 00:24:41.916978 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 17 00:24:41.921334 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 17 00:24:41.923424 systemd[1]: Finished setup-oem.service - Setup OEM. May 17 00:24:41.945721 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. May 17 00:24:42.088891 bash[2223]: Updated "/home/core/.ssh/authorized_keys" May 17 00:24:42.093064 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 17 00:24:42.121768 systemd[1]: Starting sshkeys.service... 
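For scale, the resize2fs numbers above translate directly: the root filesystem grew from 553472 to 1489915 blocks of 4 KiB each, i.e. roughly 2.1 GiB to 5.7 GiB, which is the usual first-boot expansion of the root partition to fill the EBS volume:

BLOCK = 4096  # 4 KiB ext4 blocks, per the kernel line above
before, after = 553472, 1489915
gib = lambda blocks: blocks * BLOCK / 2**30
print(f"{gib(before):.2f} GiB -> {gib(after):.2f} GiB")  # ~2.11 GiB -> ~5.68 GiB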
May 17 00:24:42.208969 systemd-logind[2074]: Watching system buttons on /dev/input/event1 (Power Button) May 17 00:24:42.209003 systemd-logind[2074]: Watching system buttons on /dev/input/event2 (Sleep Button) May 17 00:24:42.209028 systemd-logind[2074]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 17 00:24:42.211076 systemd-logind[2074]: New seat seat0. May 17 00:24:42.214997 systemd[1]: Started systemd-logind.service - User Login Management. May 17 00:24:42.224619 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 17 00:24:42.236448 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 17 00:24:42.282710 amazon-ssm-agent[2192]: Initializing new seelog logger May 17 00:24:42.287767 amazon-ssm-agent[2192]: New Seelog Logger Creation Complete May 17 00:24:42.288621 amazon-ssm-agent[2192]: 2025/05/17 00:24:42 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:24:42.288725 amazon-ssm-agent[2192]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:24:42.293831 amazon-ssm-agent[2192]: 2025/05/17 00:24:42 processing appconfig overrides May 17 00:24:42.297698 amazon-ssm-agent[2192]: 2025-05-17 00:24:42 INFO Proxy environment variables: May 17 00:24:42.302893 amazon-ssm-agent[2192]: 2025/05/17 00:24:42 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:24:42.302893 amazon-ssm-agent[2192]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:24:42.302893 amazon-ssm-agent[2192]: 2025/05/17 00:24:42 processing appconfig overrides May 17 00:24:42.302893 amazon-ssm-agent[2192]: 2025/05/17 00:24:42 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:24:42.302893 amazon-ssm-agent[2192]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:24:42.302893 amazon-ssm-agent[2192]: 2025/05/17 00:24:42 processing appconfig overrides May 17 00:24:42.317843 amazon-ssm-agent[2192]: 2025/05/17 00:24:42 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:24:42.317843 amazon-ssm-agent[2192]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:24:42.317843 amazon-ssm-agent[2192]: 2025/05/17 00:24:42 processing appconfig overrides May 17 00:24:42.376867 locksmithd[2190]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:24:42.403813 amazon-ssm-agent[2192]: 2025-05-17 00:24:42 INFO https_proxy: May 17 00:24:42.505770 amazon-ssm-agent[2192]: 2025-05-17 00:24:42 INFO http_proxy: May 17 00:24:42.540354 coreos-metadata[2255]: May 17 00:24:42.539 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 17 00:24:42.544331 coreos-metadata[2255]: May 17 00:24:42.542 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 May 17 00:24:42.548429 coreos-metadata[2255]: May 17 00:24:42.548 INFO Fetch successful May 17 00:24:42.548539 coreos-metadata[2255]: May 17 00:24:42.548 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 May 17 00:24:42.553995 dbus-daemon[2045]: [system] Successfully activated service 'org.freedesktop.hostname1' May 17 00:24:42.554175 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
May 17 00:24:42.556895 coreos-metadata[2255]: May 17 00:24:42.556 INFO Fetch successful May 17 00:24:42.557973 dbus-daemon[2045]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2184 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 17 00:24:42.568977 unknown[2255]: wrote ssh authorized keys file for user: core May 17 00:24:42.569972 systemd[1]: Starting polkit.service - Authorization Manager... May 17 00:24:42.602769 amazon-ssm-agent[2192]: 2025-05-17 00:24:42 INFO no_proxy: May 17 00:24:42.631873 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 17 00:24:42.639409 update-ssh-keys[2278]: Updated "/home/core/.ssh/authorized_keys" May 17 00:24:42.636954 systemd[1]: Finished sshkeys.service. May 17 00:24:42.653086 polkitd[2276]: Started polkitd version 121 May 17 00:24:42.689448 polkitd[2276]: Loading rules from directory /etc/polkit-1/rules.d May 17 00:24:42.693459 polkitd[2276]: Loading rules from directory /usr/share/polkit-1/rules.d May 17 00:24:42.696379 polkitd[2276]: Finished loading, compiling and executing 2 rules May 17 00:24:42.698255 dbus-daemon[2045]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 17 00:24:42.698461 systemd[1]: Started polkit.service - Authorization Manager. May 17 00:24:42.701757 amazon-ssm-agent[2192]: 2025-05-17 00:24:42 INFO Checking if agent identity type OnPrem can be assumed May 17 00:24:42.702250 polkitd[2276]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 17 00:24:42.765323 systemd-hostnamed[2184]: Hostname set to (transient) May 17 00:24:42.767135 systemd-resolved[1975]: System hostname changed to 'ip-172-31-31-125'. May 17 00:24:42.803862 amazon-ssm-agent[2192]: 2025-05-17 00:24:42 INFO Checking if agent identity type EC2 can be assumed May 17 00:24:42.903620 amazon-ssm-agent[2192]: 2025-05-17 00:24:42 INFO Agent will take identity from EC2 May 17 00:24:42.916393 containerd[2119]: time="2025-05-17T00:24:42.916280824Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 17 00:24:43.003030 amazon-ssm-agent[2192]: 2025-05-17 00:24:42 INFO [amazon-ssm-agent] using named pipe channel for IPC May 17 00:24:43.076245 containerd[2119]: time="2025-05-17T00:24:43.074055920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 00:24:43.076245 containerd[2119]: time="2025-05-17T00:24:43.076034920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:24:43.076245 containerd[2119]: time="2025-05-17T00:24:43.076078645Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 00:24:43.076245 containerd[2119]: time="2025-05-17T00:24:43.076104053Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 00:24:43.076494 containerd[2119]: time="2025-05-17T00:24:43.076282464Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 May 17 00:24:43.076494 containerd[2119]: time="2025-05-17T00:24:43.076302560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 17 00:24:43.076494 containerd[2119]: time="2025-05-17T00:24:43.076366517Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:24:43.076494 containerd[2119]: time="2025-05-17T00:24:43.076384056Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 00:24:43.081104 containerd[2119]: time="2025-05-17T00:24:43.080751514Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:24:43.081104 containerd[2119]: time="2025-05-17T00:24:43.080796715Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 00:24:43.081104 containerd[2119]: time="2025-05-17T00:24:43.080821587Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:24:43.081104 containerd[2119]: time="2025-05-17T00:24:43.080837743Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:24:43.081104 containerd[2119]: time="2025-05-17T00:24:43.081017853Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 00:24:43.081363 containerd[2119]: time="2025-05-17T00:24:43.081288619Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:24:43.082499 containerd[2119]: time="2025-05-17T00:24:43.081532612Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:24:43.082499 containerd[2119]: time="2025-05-17T00:24:43.081558636Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 00:24:43.082499 containerd[2119]: time="2025-05-17T00:24:43.081666040Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 17 00:24:43.082499 containerd[2119]: time="2025-05-17T00:24:43.081726835Z" level=info msg="metadata content store policy set" policy=shared May 17 00:24:43.092746 containerd[2119]: time="2025-05-17T00:24:43.091425239Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:24:43.092746 containerd[2119]: time="2025-05-17T00:24:43.091518859Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:24:43.092746 containerd[2119]: time="2025-05-17T00:24:43.091545058Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 17 00:24:43.092746 containerd[2119]: time="2025-05-17T00:24:43.091566985Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 May 17 00:24:43.092746 containerd[2119]: time="2025-05-17T00:24:43.091587458Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 00:24:43.092746 containerd[2119]: time="2025-05-17T00:24:43.091759957Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 00:24:43.092746 containerd[2119]: time="2025-05-17T00:24:43.092205667Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 00:24:43.092746 containerd[2119]: time="2025-05-17T00:24:43.092311878Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 17 00:24:43.092746 containerd[2119]: time="2025-05-17T00:24:43.092331978Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 17 00:24:43.092746 containerd[2119]: time="2025-05-17T00:24:43.092350781Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 17 00:24:43.092746 containerd[2119]: time="2025-05-17T00:24:43.092371620Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 00:24:43.092746 containerd[2119]: time="2025-05-17T00:24:43.092391265Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:24:43.092746 containerd[2119]: time="2025-05-17T00:24:43.092409572Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 00:24:43.092746 containerd[2119]: time="2025-05-17T00:24:43.092429740Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 00:24:43.093318 containerd[2119]: time="2025-05-17T00:24:43.092451632Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 00:24:43.093318 containerd[2119]: time="2025-05-17T00:24:43.092470695Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:24:43.093318 containerd[2119]: time="2025-05-17T00:24:43.092512967Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 00:24:43.093318 containerd[2119]: time="2025-05-17T00:24:43.092534528Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 00:24:43.093318 containerd[2119]: time="2025-05-17T00:24:43.092572048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 00:24:43.093318 containerd[2119]: time="2025-05-17T00:24:43.092598283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 17 00:24:43.093318 containerd[2119]: time="2025-05-17T00:24:43.092622749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:24:43.093318 containerd[2119]: time="2025-05-17T00:24:43.092646018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 00:24:43.093318 containerd[2119]: time="2025-05-17T00:24:43.092663725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 May 17 00:24:43.093318 containerd[2119]: time="2025-05-17T00:24:43.092682379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 00:24:43.093318 containerd[2119]: time="2025-05-17T00:24:43.092699448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 00:24:43.093318 containerd[2119]: time="2025-05-17T00:24:43.092721150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 00:24:43.093318 containerd[2119]: time="2025-05-17T00:24:43.092742480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 17 00:24:43.093318 containerd[2119]: time="2025-05-17T00:24:43.092761526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 17 00:24:43.093850 containerd[2119]: time="2025-05-17T00:24:43.092778020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 00:24:43.093850 containerd[2119]: time="2025-05-17T00:24:43.092795047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 17 00:24:43.093850 containerd[2119]: time="2025-05-17T00:24:43.092810720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:24:43.093850 containerd[2119]: time="2025-05-17T00:24:43.092831755Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 17 00:24:43.093850 containerd[2119]: time="2025-05-17T00:24:43.092860303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 17 00:24:43.093850 containerd[2119]: time="2025-05-17T00:24:43.092884767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 00:24:43.093850 containerd[2119]: time="2025-05-17T00:24:43.092901507Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:24:43.093850 containerd[2119]: time="2025-05-17T00:24:43.092956467Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 00:24:43.093850 containerd[2119]: time="2025-05-17T00:24:43.092981063Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 17 00:24:43.093850 containerd[2119]: time="2025-05-17T00:24:43.093008640Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 00:24:43.093850 containerd[2119]: time="2025-05-17T00:24:43.093027900Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 17 00:24:43.093850 containerd[2119]: time="2025-05-17T00:24:43.093044955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:24:43.093850 containerd[2119]: time="2025-05-17T00:24:43.093066452Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 17 00:24:43.093850 containerd[2119]: time="2025-05-17T00:24:43.093081405Z" level=info msg="NRI interface is disabled by configuration." 
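Earlier in this plugin scan, containerd skipped the btrfs and zfs snapshotters because /var/lib/containerd sits on ext4. A sketch of the underlying check, resolving a path's filesystem type from /proc/self/mounts (longest mount-point match wins; octal-escaped mount paths are ignored for brevity):

def fs_type(path: str) -> str:
    best, best_fs = "", "unknown"
    with open("/proc/self/mounts") as mounts:
        for line in mounts:
            _dev, mnt, fstype, *_ = line.split()
            if path.startswith(mnt) and len(mnt) > len(best):
                best, best_fs = mnt, fstype
    return best_fs

print(fs_type("/var/lib/containerd"))  # "ext4" on this host, hence the skips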
May 17 00:24:43.094365 containerd[2119]: time="2025-05-17T00:24:43.093096818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 17 00:24:43.104034 systemd[1]: Started containerd.service - containerd container runtime. May 17 00:24:43.104800 containerd[2119]: time="2025-05-17T00:24:43.097794995Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:24:43.104800 containerd[2119]: time="2025-05-17T00:24:43.097907154Z" level=info msg="Connect containerd service" May 17 00:24:43.104800 containerd[2119]: time="2025-05-17T00:24:43.097962227Z" level=info msg="using legacy CRI server" May 17 00:24:43.104800 containerd[2119]: time="2025-05-17T00:24:43.097974595Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 17 00:24:43.104800 containerd[2119]: time="2025-05-17T00:24:43.098118595Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:24:43.104800 containerd[2119]: time="2025-05-17T00:24:43.100170061Z" level=error msg="failed to 
load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:24:43.104800 containerd[2119]: time="2025-05-17T00:24:43.102575859Z" level=info msg="Start subscribing containerd event" May 17 00:24:43.104800 containerd[2119]: time="2025-05-17T00:24:43.102810064Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:24:43.104800 containerd[2119]: time="2025-05-17T00:24:43.102869898Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 00:24:43.104800 containerd[2119]: time="2025-05-17T00:24:43.103624397Z" level=info msg="Start recovering state" May 17 00:24:43.104800 containerd[2119]: time="2025-05-17T00:24:43.103739394Z" level=info msg="Start event monitor" May 17 00:24:43.104800 containerd[2119]: time="2025-05-17T00:24:43.103762581Z" level=info msg="Start snapshots syncer" May 17 00:24:43.104800 containerd[2119]: time="2025-05-17T00:24:43.103780375Z" level=info msg="Start cni network conf syncer for default" May 17 00:24:43.104800 containerd[2119]: time="2025-05-17T00:24:43.103800761Z" level=info msg="Start streaming server" May 17 00:24:43.105924 amazon-ssm-agent[2192]: 2025-05-17 00:24:42 INFO [amazon-ssm-agent] using named pipe channel for IPC May 17 00:24:43.108606 containerd[2119]: time="2025-05-17T00:24:43.107531142Z" level=info msg="containerd successfully booted in 0.196135s" May 17 00:24:43.204537 amazon-ssm-agent[2192]: 2025-05-17 00:24:42 INFO [amazon-ssm-agent] using named pipe channel for IPC May 17 00:24:43.303895 amazon-ssm-agent[2192]: 2025-05-17 00:24:42 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 May 17 00:24:43.406500 amazon-ssm-agent[2192]: 2025-05-17 00:24:42 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 May 17 00:24:43.427511 tar[2100]: linux-amd64/LICENSE May 17 00:24:43.427511 tar[2100]: linux-amd64/README.md May 17 00:24:43.452852 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 17 00:24:43.456846 sshd_keygen[2097]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:24:43.508576 amazon-ssm-agent[2192]: 2025-05-17 00:24:42 INFO [amazon-ssm-agent] Starting Core Agent May 17 00:24:43.503456 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 17 00:24:43.514913 systemd[1]: Starting issuegen.service - Generate /run/issue... May 17 00:24:43.545207 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:24:43.547592 systemd[1]: Finished issuegen.service - Generate /run/issue. May 17 00:24:43.563706 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 17 00:24:43.576603 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 17 00:24:43.586945 systemd[1]: Started getty@tty1.service - Getty on tty1. May 17 00:24:43.598959 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 17 00:24:43.599819 systemd[1]: Reached target getty.target - Login Prompts. May 17 00:24:43.607868 amazon-ssm-agent[2192]: 2025-05-17 00:24:42 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration May 17 00:24:43.708246 amazon-ssm-agent[2192]: 2025-05-17 00:24:42 INFO [Registrar] Starting registrar module May 17 00:24:43.808534 amazon-ssm-agent[2192]: 2025-05-17 00:24:42 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration May 17 00:24:43.870385 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 17 00:24:43.877259 systemd[1]: Started sshd@0-172.31.31.125:22-147.75.109.163:34506.service - OpenSSH per-connection server daemon (147.75.109.163:34506). May 17 00:24:44.107191 sshd[2322]: Accepted publickey for core from 147.75.109.163 port 34506 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:24:44.109056 sshd[2322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:44.119093 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 17 00:24:44.125804 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 17 00:24:44.129198 systemd-logind[2074]: New session 1 of user core. May 17 00:24:44.144784 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 17 00:24:44.157261 systemd[1]: Starting user@500.service - User Manager for UID 500... May 17 00:24:44.164421 (systemd)[2328]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:24:44.301130 systemd[2328]: Queued start job for default target default.target. May 17 00:24:44.302037 systemd[2328]: Created slice app.slice - User Application Slice. May 17 00:24:44.302066 systemd[2328]: Reached target paths.target - Paths. May 17 00:24:44.302085 systemd[2328]: Reached target timers.target - Timers. May 17 00:24:44.307168 systemd[2328]: Starting dbus.socket - D-Bus User Message Bus Socket... May 17 00:24:44.316717 systemd[2328]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 17 00:24:44.316801 systemd[2328]: Reached target sockets.target - Sockets. May 17 00:24:44.316820 systemd[2328]: Reached target basic.target - Basic System. May 17 00:24:44.316870 systemd[2328]: Reached target default.target - Main User Target. May 17 00:24:44.316906 systemd[2328]: Startup finished in 145ms. May 17 00:24:44.317445 systemd[1]: Started user@500.service - User Manager for UID 500. May 17 00:24:44.324380 systemd[1]: Started session-1.scope - Session 1 of User core. May 17 00:24:44.466823 systemd[1]: Started sshd@1-172.31.31.125:22-147.75.109.163:34508.service - OpenSSH per-connection server daemon (147.75.109.163:34508). May 17 00:24:44.619011 sshd[2340]: Accepted publickey for core from 147.75.109.163 port 34508 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:24:44.620471 sshd[2340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:44.625619 systemd-logind[2074]: New session 2 of user core. May 17 00:24:44.633315 systemd[1]: Started session-2.scope - Session 2 of User core. May 17 00:24:44.766215 sshd[2340]: pam_unix(sshd:session): session closed for user core May 17 00:24:44.770983 systemd[1]: sshd@1-172.31.31.125:22-147.75.109.163:34508.service: Deactivated successfully. May 17 00:24:44.775247 systemd[1]: session-2.scope: Deactivated successfully. May 17 00:24:44.775991 systemd-logind[2074]: Session 2 logged out. Waiting for processes to exit. May 17 00:24:44.777271 systemd-logind[2074]: Removed session 2. 
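The SHA256:... string in the "Accepted publickey" lines is OpenSSH's key fingerprint: the unpadded base64 of the SHA-256 digest of the raw key blob, the same value ssh-keygen -lf prints. A sketch, assuming an authorized_keys-style input line:

import base64, hashlib

def openssh_fingerprint(pubkey_line: str) -> str:
    # Second whitespace-separated field is the base64 key blob.
    blob = base64.b64decode(pubkey_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    # OpenSSH strips the base64 padding from the printed fingerprint.
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")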
May 17 00:24:44.794802 systemd[1]: Started sshd@2-172.31.31.125:22-147.75.109.163:34520.service - OpenSSH per-connection server daemon (147.75.109.163:34520). May 17 00:24:44.903704 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:24:44.906141 systemd[1]: Reached target multi-user.target - Multi-User System. May 17 00:24:44.909568 systemd[1]: Startup finished in 7.765s (kernel) + 8.097s (userspace) = 15.863s. May 17 00:24:44.913783 (kubelet)[2358]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:24:44.947699 sshd[2348]: Accepted publickey for core from 147.75.109.163 port 34520 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:24:44.949246 sshd[2348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:44.954408 systemd-logind[2074]: New session 3 of user core. May 17 00:24:44.961954 systemd[1]: Started session-3.scope - Session 3 of User core. May 17 00:24:45.082423 sshd[2348]: pam_unix(sshd:session): session closed for user core May 17 00:24:45.085360 systemd[1]: sshd@2-172.31.31.125:22-147.75.109.163:34520.service: Deactivated successfully. May 17 00:24:45.089875 systemd[1]: session-3.scope: Deactivated successfully. May 17 00:24:45.090847 systemd-logind[2074]: Session 3 logged out. Waiting for processes to exit. May 17 00:24:45.091896 systemd-logind[2074]: Removed session 3. May 17 00:24:46.064464 kubelet[2358]: E0517 00:24:46.064391 2358 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:24:46.067514 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:24:46.067729 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:24:47.714556 amazon-ssm-agent[2192]: 2025-05-17 00:24:47 INFO [EC2Identity] EC2 registration was successful. May 17 00:24:47.747553 amazon-ssm-agent[2192]: 2025-05-17 00:24:47 INFO [CredentialRefresher] credentialRefresher has started May 17 00:24:47.747553 amazon-ssm-agent[2192]: 2025-05-17 00:24:47 INFO [CredentialRefresher] Starting credentials refresher loop May 17 00:24:47.747553 amazon-ssm-agent[2192]: 2025-05-17 00:24:47 INFO EC2RoleProvider Successfully connected with instance profile role credentials May 17 00:24:47.815293 amazon-ssm-agent[2192]: 2025-05-17 00:24:47 INFO [CredentialRefresher] Next credential rotation will be in 31.50832885353333 minutes May 17 00:24:49.295845 systemd-resolved[1975]: Clock change detected. Flushing caches. May 17 00:24:49.413463 amazon-ssm-agent[2192]: 2025-05-17 00:24:49 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process May 17 00:24:49.513945 amazon-ssm-agent[2192]: 2025-05-17 00:24:49 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2378) started May 17 00:24:49.614501 amazon-ssm-agent[2192]: 2025-05-17 00:24:49 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds May 17 00:24:55.762945 systemd[1]: Started sshd@3-172.31.31.125:22-147.75.109.163:40856.service - OpenSSH per-connection server daemon (147.75.109.163:40856). 
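The kubelet exit above is the expected pre-bootstrap state on a kubeadm-managed node: /var/lib/kubelet/config.yaml is only written by kubeadm init/join, so kubelet fails on startup and systemd keeps retrying (the restart scheduled below at 00:24:56, roughly 10 s after the 00:24:46 failure, is consistent with the kubeadm drop-in's Restart settings). A trivial check for the same precondition:

import os.path

CONFIG = "/var/lib/kubelet/config.yaml"
if not os.path.exists(CONFIG):
    # This is the exact file the run.go error above complains about;
    # the crash loop resolves itself once kubeadm writes it.
    print(f"kubelet would fail: {CONFIG} missing (expected until kubeadm runs)")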
May 17 00:24:55.916842 sshd[2389]: Accepted publickey for core from 147.75.109.163 port 40856 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:24:55.918411 sshd[2389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:55.923596 systemd-logind[2074]: New session 4 of user core. May 17 00:24:55.932083 systemd[1]: Started session-4.scope - Session 4 of User core. May 17 00:24:56.053473 sshd[2389]: pam_unix(sshd:session): session closed for user core May 17 00:24:56.056366 systemd[1]: sshd@3-172.31.31.125:22-147.75.109.163:40856.service: Deactivated successfully. May 17 00:24:56.059676 systemd-logind[2074]: Session 4 logged out. Waiting for processes to exit. May 17 00:24:56.060252 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:24:56.062079 systemd-logind[2074]: Removed session 4. May 17 00:24:56.081917 systemd[1]: Started sshd@4-172.31.31.125:22-147.75.109.163:40864.service - OpenSSH per-connection server daemon (147.75.109.163:40864). May 17 00:24:56.233790 sshd[2397]: Accepted publickey for core from 147.75.109.163 port 40864 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:24:56.235144 sshd[2397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:56.240187 systemd-logind[2074]: New session 5 of user core. May 17 00:24:56.247937 systemd[1]: Started session-5.scope - Session 5 of User core. May 17 00:24:56.362841 sshd[2397]: pam_unix(sshd:session): session closed for user core May 17 00:24:56.366306 systemd[1]: sshd@4-172.31.31.125:22-147.75.109.163:40864.service: Deactivated successfully. May 17 00:24:56.369209 systemd-logind[2074]: Session 5 logged out. Waiting for processes to exit. May 17 00:24:56.369772 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:24:56.370814 systemd-logind[2074]: Removed session 5. May 17 00:24:56.388940 systemd[1]: Started sshd@5-172.31.31.125:22-147.75.109.163:40874.service - OpenSSH per-connection server daemon (147.75.109.163:40874). May 17 00:24:56.538503 sshd[2405]: Accepted publickey for core from 147.75.109.163 port 40874 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:24:56.539828 sshd[2405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:56.545037 systemd-logind[2074]: New session 6 of user core. May 17 00:24:56.547893 systemd[1]: Started session-6.scope - Session 6 of User core. May 17 00:24:56.668459 sshd[2405]: pam_unix(sshd:session): session closed for user core May 17 00:24:56.671727 systemd[1]: sshd@5-172.31.31.125:22-147.75.109.163:40874.service: Deactivated successfully. May 17 00:24:56.675662 systemd-logind[2074]: Session 6 logged out. Waiting for processes to exit. May 17 00:24:56.675878 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:24:56.677730 systemd-logind[2074]: Removed session 6. May 17 00:24:56.698074 systemd[1]: Started sshd@6-172.31.31.125:22-147.75.109.163:40880.service - OpenSSH per-connection server daemon (147.75.109.163:40880). May 17 00:24:56.767919 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 17 00:24:56.780303 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 17 00:24:56.851606 sshd[2413]: Accepted publickey for core from 147.75.109.163 port 40880 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:24:56.852712 sshd[2413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:56.858220 systemd-logind[2074]: New session 7 of user core. May 17 00:24:56.867003 systemd[1]: Started session-7.scope - Session 7 of User core. May 17 00:24:57.001801 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:24:57.005668 (kubelet)[2430]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:24:57.022524 sudo[2421]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 17 00:24:57.023035 sudo[2421]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:24:57.038636 sudo[2421]: pam_unix(sudo:session): session closed for user root May 17 00:24:57.063526 kubelet[2430]: E0517 00:24:57.063439 2430 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:24:57.064140 sshd[2413]: pam_unix(sshd:session): session closed for user core May 17 00:24:57.068808 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:24:57.069101 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:24:57.073012 systemd[1]: sshd@6-172.31.31.125:22-147.75.109.163:40880.service: Deactivated successfully. May 17 00:24:57.077367 systemd-logind[2074]: Session 7 logged out. Waiting for processes to exit. May 17 00:24:57.077878 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:24:57.079680 systemd-logind[2074]: Removed session 7. May 17 00:24:57.093004 systemd[1]: Started sshd@7-172.31.31.125:22-147.75.109.163:40888.service - OpenSSH per-connection server daemon (147.75.109.163:40888). May 17 00:24:57.243141 sshd[2442]: Accepted publickey for core from 147.75.109.163 port 40888 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:24:57.244690 sshd[2442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:57.250255 systemd-logind[2074]: New session 8 of user core. May 17 00:24:57.256980 systemd[1]: Started session-8.scope - Session 8 of User core. May 17 00:24:57.355704 sudo[2447]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 17 00:24:57.356126 sudo[2447]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:24:57.360525 sudo[2447]: pam_unix(sudo:session): session closed for user root May 17 00:24:57.366181 sudo[2446]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 17 00:24:57.366603 sudo[2446]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:24:57.381000 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 17 00:24:57.383533 auditctl[2450]: No rules May 17 00:24:57.384019 systemd[1]: audit-rules.service: Deactivated successfully. May 17 00:24:57.384352 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. 
May 17 00:24:57.395646 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:24:57.422229 augenrules[2469]: No rules May 17 00:24:57.424007 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:24:57.427121 sudo[2446]: pam_unix(sudo:session): session closed for user root May 17 00:24:57.450672 sshd[2442]: pam_unix(sshd:session): session closed for user core May 17 00:24:57.456503 systemd[1]: sshd@7-172.31.31.125:22-147.75.109.163:40888.service: Deactivated successfully. May 17 00:24:57.460207 systemd[1]: session-8.scope: Deactivated successfully. May 17 00:24:57.461180 systemd-logind[2074]: Session 8 logged out. Waiting for processes to exit. May 17 00:24:57.462265 systemd-logind[2074]: Removed session 8. May 17 00:24:57.487021 systemd[1]: Started sshd@8-172.31.31.125:22-147.75.109.163:40892.service - OpenSSH per-connection server daemon (147.75.109.163:40892). May 17 00:24:57.641191 sshd[2478]: Accepted publickey for core from 147.75.109.163 port 40892 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:24:57.642875 sshd[2478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:57.648435 systemd-logind[2074]: New session 9 of user core. May 17 00:24:57.653972 systemd[1]: Started session-9.scope - Session 9 of User core. May 17 00:24:57.752758 sudo[2482]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:24:57.753056 sudo[2482]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:24:58.315280 systemd[1]: Starting docker.service - Docker Application Container Engine... May 17 00:24:58.315296 (dockerd)[2498]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 17 00:24:58.937865 dockerd[2498]: time="2025-05-17T00:24:58.937797541Z" level=info msg="Starting up" May 17 00:24:59.247468 dockerd[2498]: time="2025-05-17T00:24:59.247338867Z" level=info msg="Loading containers: start." May 17 00:24:59.383617 kernel: Initializing XFRM netlink socket May 17 00:24:59.412680 (udev-worker)[2522]: Network interface NamePolicy= disabled on kernel command line. May 17 00:24:59.482547 systemd-networkd[1648]: docker0: Link UP May 17 00:24:59.511753 dockerd[2498]: time="2025-05-17T00:24:59.511620651Z" level=info msg="Loading containers: done." May 17 00:24:59.548500 dockerd[2498]: time="2025-05-17T00:24:59.548440708Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 00:24:59.548999 dockerd[2498]: time="2025-05-17T00:24:59.548575163Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 17 00:24:59.548999 dockerd[2498]: time="2025-05-17T00:24:59.548737479Z" level=info msg="Daemon has completed initialization" May 17 00:24:59.584829 dockerd[2498]: time="2025-05-17T00:24:59.584162850Z" level=info msg="API listen on /run/docker.sock" May 17 00:24:59.584488 systemd[1]: Started docker.service - Docker Application Container Engine. 
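Docker's final line above reports the API listening on /run/docker.sock. A minimal liveness probe can speak raw HTTP over that unix socket without any Docker SDK; /_ping is a standard Docker Engine API endpoint that answers "OK" (a single recv suffices for this tiny HTTP/1.0 response):

import socket

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.connect("/run/docker.sock")
s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
print(s.recv(4096).decode(errors="replace"))  # expect "HTTP/1.0 200 OK ... OK"
s.close()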
May 17 00:25:00.854358 containerd[2119]: time="2025-05-17T00:25:00.854316216Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\"" May 17 00:25:01.489806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2690625374.mount: Deactivated successfully. May 17 00:25:03.787192 containerd[2119]: time="2025-05-17T00:25:03.787125175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:03.788616 containerd[2119]: time="2025-05-17T00:25:03.788550881Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.9: active requests=0, bytes read=28078845" May 17 00:25:03.789752 containerd[2119]: time="2025-05-17T00:25:03.789473543Z" level=info msg="ImageCreate event name:\"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:03.797772 containerd[2119]: time="2025-05-17T00:25:03.797669409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:03.799617 containerd[2119]: time="2025-05-17T00:25:03.799472697Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.9\" with image id \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\", size \"28075645\" in 2.945109028s" May 17 00:25:03.799617 containerd[2119]: time="2025-05-17T00:25:03.799540248Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\"" May 17 00:25:03.800220 containerd[2119]: time="2025-05-17T00:25:03.800152278Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\"" May 17 00:25:05.695603 containerd[2119]: time="2025-05-17T00:25:05.695540338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:05.697758 containerd[2119]: time="2025-05-17T00:25:05.697702300Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.9: active requests=0, bytes read=24713522" May 17 00:25:05.700154 containerd[2119]: time="2025-05-17T00:25:05.700089682Z" level=info msg="ImageCreate event name:\"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:05.704016 containerd[2119]: time="2025-05-17T00:25:05.703978069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:05.705406 containerd[2119]: time="2025-05-17T00:25:05.704985881Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.9\" with image id \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\", size \"26315362\" in 1.90479988s" May 17 
00:25:05.705406 containerd[2119]: time="2025-05-17T00:25:05.705023622Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\"" May 17 00:25:05.705637 containerd[2119]: time="2025-05-17T00:25:05.705579501Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\"" May 17 00:25:07.267946 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 17 00:25:07.277513 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:25:07.398931 containerd[2119]: time="2025-05-17T00:25:07.398882015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:07.400888 containerd[2119]: time="2025-05-17T00:25:07.400757486Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.9: active requests=0, bytes read=18784311" May 17 00:25:07.406907 containerd[2119]: time="2025-05-17T00:25:07.406440638Z" level=info msg="ImageCreate event name:\"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:07.410151 containerd[2119]: time="2025-05-17T00:25:07.410114724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:07.411749 containerd[2119]: time="2025-05-17T00:25:07.411260115Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.9\" with image id \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\", size \"20386169\" in 1.705631039s" May 17 00:25:07.411878 containerd[2119]: time="2025-05-17T00:25:07.411864405Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\"" May 17 00:25:07.412525 containerd[2119]: time="2025-05-17T00:25:07.412432066Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 17 00:25:07.518782 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:25:07.523382 (kubelet)[2712]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:25:07.571849 kubelet[2712]: E0517 00:25:07.571795 2712 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:25:07.575812 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:25:07.576052 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:25:08.502942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4189324940.mount: Deactivated successfully. 
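Each pull above pairs a "bytes read" figure with a wall-clock duration, which yields a rough registry throughput. Note the byte counts are compressed layer data and the durations include unpacking, so treat the result as a ballpark figure only:

# Sizes and durations copied from the pull logs above.
pulls = {
    "kube-apiserver:v1.31.9":          (28078845, 2.945109028),
    "kube-controller-manager:v1.31.9": (24713522, 1.90479988),
    "kube-scheduler:v1.31.9":          (18784311, 1.705631039),
}
for image, (size, secs) in pulls.items():
    print(f"{image}: {size / secs / 2**20:.1f} MiB/s")  # roughly 9-12 MiB/s here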
May 17 00:25:09.054750 containerd[2119]: time="2025-05-17T00:25:09.054682996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:25:09.055806 containerd[2119]: time="2025-05-17T00:25:09.055571907Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.9: active requests=0, bytes read=30355623"
May 17 00:25:09.058620 containerd[2119]: time="2025-05-17T00:25:09.057240176Z" level=info msg="ImageCreate event name:\"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:25:09.059866 containerd[2119]: time="2025-05-17T00:25:09.059823892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:25:09.060754 containerd[2119]: time="2025-05-17T00:25:09.060715777Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.9\" with image id \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\", repo tag \"registry.k8s.io/kube-proxy:v1.31.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\", size \"30354642\" in 1.648044768s"
May 17 00:25:09.060904 containerd[2119]: time="2025-05-17T00:25:09.060881565Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\""
May 17 00:25:09.061552 containerd[2119]: time="2025-05-17T00:25:09.061442744Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 17 00:25:09.580459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2567875817.mount: Deactivated successfully.
May 17 00:25:10.600233 containerd[2119]: time="2025-05-17T00:25:10.600182281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:25:10.611137 containerd[2119]: time="2025-05-17T00:25:10.611070839Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
May 17 00:25:10.617209 containerd[2119]: time="2025-05-17T00:25:10.616997582Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:25:10.621813 containerd[2119]: time="2025-05-17T00:25:10.621767732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:25:10.623277 containerd[2119]: time="2025-05-17T00:25:10.623221523Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.561594476s"
May 17 00:25:10.623277 containerd[2119]: time="2025-05-17T00:25:10.623261812Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
May 17 00:25:10.623989 containerd[2119]: time="2025-05-17T00:25:10.623953785Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 17 00:25:11.114547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2034079300.mount: Deactivated successfully.
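
Each pull above follows the same shape: PullImage, a pair of ImageCreate events (tag and digest), a final "stop pulling" entry with the byte count, then a "Pulled image ... in <duration>" summary. A hedged Go sketch that scrapes the image name and pull duration out of those summary lines; the regular expression is fitted to the escaped-quote message format shown above and nothing more:

package main

import (
	"fmt"
	"regexp"
	"time"
)

// pulledRE matches containerd's "Pulled image" summary as printed in this
// log; only the repo tag and the trailing duration are captured.
var pulledRE = regexp.MustCompile(`Pulled image \\"([^"\\]+)\\".* in ([0-9.]+m?s)`)

func main() {
	// One (truncated) entry copied from the log above.
	line := `msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9...\", size \"18562039\" in 1.561594476s"`
	m := pulledRE.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	d, err := time.ParseDuration(m[2]) // handles "512.820118ms" and "1.561594476s" alike
	if err != nil {
		fmt.Println("bad duration:", err)
		return
	}
	fmt.Printf("image=%s pull=%v\n", m[1], d)
}
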
May 17 00:25:11.128260 containerd[2119]: time="2025-05-17T00:25:11.128189138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:25:11.130245 containerd[2119]: time="2025-05-17T00:25:11.129981574Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
May 17 00:25:11.133360 containerd[2119]: time="2025-05-17T00:25:11.132349360Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:25:11.136249 containerd[2119]: time="2025-05-17T00:25:11.136193692Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:25:11.137469 containerd[2119]: time="2025-05-17T00:25:11.136812305Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 512.820118ms"
May 17 00:25:11.137469 containerd[2119]: time="2025-05-17T00:25:11.136845965Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 17 00:25:11.137827 containerd[2119]: time="2025-05-17T00:25:11.137804368Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
May 17 00:25:11.716030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1472604133.mount: Deactivated successfully.
May 17 00:25:13.453090 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
May 17 00:25:13.727664 containerd[2119]: time="2025-05-17T00:25:13.727516844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:25:13.728958 containerd[2119]: time="2025-05-17T00:25:13.728780454Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013"
May 17 00:25:13.730106 containerd[2119]: time="2025-05-17T00:25:13.730021852Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:25:13.733971 containerd[2119]: time="2025-05-17T00:25:13.733380232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:25:13.736032 containerd[2119]: time="2025-05-17T00:25:13.734898797Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.596969796s"
May 17 00:25:13.736032 containerd[2119]: time="2025-05-17T00:25:13.734944767Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
May 17 00:25:16.406114 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:25:16.413905 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:25:16.450266 systemd[1]: Reloading requested from client PID 2870 ('systemctl') (unit session-9.scope)...
May 17 00:25:16.450285 systemd[1]: Reloading...
May 17 00:25:16.557613 zram_generator::config[2913]: No configuration found.
May 17 00:25:16.685661 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:25:16.769127 systemd[1]: Reloading finished in 318 ms.
May 17 00:25:16.806152 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 17 00:25:16.806228 systemd[1]: kubelet.service: Failed with result 'signal'.
May 17 00:25:16.806484 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:25:16.808813 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:25:17.318809 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:25:17.326553 (kubelet)[2982]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 17 00:25:17.390060 kubelet[2982]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 17 00:25:17.390671 kubelet[2982]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 17 00:25:17.390671 kubelet[2982]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 17 00:25:17.396801 kubelet[2982]: I0517 00:25:17.396209 2982 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 17 00:25:17.579806 kubelet[2982]: I0517 00:25:17.579676 2982 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
May 17 00:25:17.579806 kubelet[2982]: I0517 00:25:17.579718 2982 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 17 00:25:17.580525 kubelet[2982]: I0517 00:25:17.580493 2982 server.go:934] "Client rotation is on, will bootstrap in background"
May 17 00:25:17.619872 kubelet[2982]: I0517 00:25:17.619812 2982 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 17 00:25:17.623000 kubelet[2982]: E0517 00:25:17.622411 2982 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.31.125:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.31.125:6443: connect: connection refused" logger="UnhandledError"
May 17 00:25:17.630027 kubelet[2982]: E0517 00:25:17.629988 2982 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 17 00:25:17.630027 kubelet[2982]: I0517 00:25:17.630019 2982 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 17 00:25:17.634558 kubelet[2982]: I0517 00:25:17.634196 2982 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
defaulting to /" May 17 00:25:17.637354 kubelet[2982]: I0517 00:25:17.637309 2982 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:25:17.637561 kubelet[2982]: I0517 00:25:17.637494 2982 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:25:17.637737 kubelet[2982]: I0517 00:25:17.637534 2982 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-125","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} May 17 00:25:17.637865 kubelet[2982]: I0517 00:25:17.637743 2982 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:25:17.637865 kubelet[2982]: I0517 00:25:17.637755 2982 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:25:17.637865 kubelet[2982]: I0517 00:25:17.637854 2982 state_mem.go:36] "Initialized new in-memory state store" May 17 00:25:17.645034 kubelet[2982]: I0517 00:25:17.644758 2982 kubelet.go:408] "Attempting to sync node with API server" May 17 00:25:17.645034 kubelet[2982]: I0517 00:25:17.644804 2982 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:25:17.645034 kubelet[2982]: I0517 00:25:17.644841 2982 kubelet.go:314] "Adding apiserver pod source" May 17 00:25:17.645034 kubelet[2982]: I0517 00:25:17.644858 2982 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:25:17.647048 kubelet[2982]: W0517 00:25:17.646973 2982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.31.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-125&limit=500&resourceVersion=0": dial tcp 172.31.31.125:6443: connect: connection refused May 17 00:25:17.647173 kubelet[2982]: E0517 00:25:17.647057 2982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.31.31.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-125&limit=500&resourceVersion=0\": dial tcp 172.31.31.125:6443: connect: connection refused" logger="UnhandledError" May 17 00:25:17.648838 kubelet[2982]: W0517 00:25:17.648326 2982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.31.125:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.31.125:6443: connect: connection refused May 17 00:25:17.648838 kubelet[2982]: E0517 00:25:17.648385 2982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.31.125:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.125:6443: connect: connection refused" logger="UnhandledError" May 17 00:25:17.649261 kubelet[2982]: I0517 00:25:17.649230 2982 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:25:17.654663 kubelet[2982]: I0517 00:25:17.654641 2982 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:25:17.656349 kubelet[2982]: W0517 00:25:17.656304 2982 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 17 00:25:17.657105 kubelet[2982]: I0517 00:25:17.656926 2982 server.go:1274] "Started kubelet" May 17 00:25:17.657307 kubelet[2982]: I0517 00:25:17.657212 2982 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:25:17.658350 kubelet[2982]: I0517 00:25:17.658098 2982 server.go:449] "Adding debug handlers to kubelet server" May 17 00:25:17.662079 kubelet[2982]: I0517 00:25:17.662042 2982 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:25:17.667867 kubelet[2982]: I0517 00:25:17.667596 2982 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:25:17.667867 kubelet[2982]: I0517 00:25:17.667807 2982 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:25:17.670656 kubelet[2982]: E0517 00:25:17.667976 2982 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.125:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.125:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-31-125.184028cd4db57a8d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-125,UID:ip-172-31-31-125,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-125,},FirstTimestamp:2025-05-17 00:25:17.656898189 +0000 UTC m=+0.325876363,LastTimestamp:2025-05-17 00:25:17.656898189 +0000 UTC m=+0.325876363,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-125,}" May 17 00:25:17.670656 kubelet[2982]: I0517 00:25:17.670180 2982 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:25:17.672328 kubelet[2982]: I0517 00:25:17.672186 2982 volume_manager.go:289] "Starting Kubelet Volume 
Manager" May 17 00:25:17.672445 kubelet[2982]: E0517 00:25:17.672430 2982 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-31-125\" not found" May 17 00:25:17.676472 kubelet[2982]: E0517 00:25:17.676437 2982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-125?timeout=10s\": dial tcp 172.31.31.125:6443: connect: connection refused" interval="200ms" May 17 00:25:17.678299 kubelet[2982]: I0517 00:25:17.677907 2982 reconciler.go:26] "Reconciler: start to sync state" May 17 00:25:17.679550 kubelet[2982]: I0517 00:25:17.679534 2982 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:25:17.680103 kubelet[2982]: W0517 00:25:17.680027 2982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.31.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.125:6443: connect: connection refused May 17 00:25:17.680103 kubelet[2982]: E0517 00:25:17.680072 2982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.31.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.125:6443: connect: connection refused" logger="UnhandledError" May 17 00:25:17.691072 kubelet[2982]: I0517 00:25:17.689353 2982 factory.go:221] Registration of the systemd container factory successfully May 17 00:25:17.691072 kubelet[2982]: I0517 00:25:17.689468 2982 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:25:17.693756 kubelet[2982]: I0517 00:25:17.693735 2982 factory.go:221] Registration of the containerd container factory successfully May 17 00:25:17.704139 kubelet[2982]: E0517 00:25:17.694296 2982 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:25:17.704243 kubelet[2982]: I0517 00:25:17.704183 2982 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:25:17.707851 kubelet[2982]: I0517 00:25:17.707815 2982 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 00:25:17.707851 kubelet[2982]: I0517 00:25:17.707846 2982 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:25:17.707851 kubelet[2982]: I0517 00:25:17.707867 2982 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:25:17.708020 kubelet[2982]: E0517 00:25:17.707903 2982 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:25:17.713533 kubelet[2982]: W0517 00:25:17.713479 2982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.31.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.125:6443: connect: connection refused May 17 00:25:17.713960 kubelet[2982]: E0517 00:25:17.713544 2982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.31.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.125:6443: connect: connection refused" logger="UnhandledError" May 17 00:25:17.733554 kubelet[2982]: I0517 00:25:17.733517 2982 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:25:17.733554 kubelet[2982]: I0517 00:25:17.733534 2982 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:25:17.733554 kubelet[2982]: I0517 00:25:17.733550 2982 state_mem.go:36] "Initialized new in-memory state store" May 17 00:25:17.739603 kubelet[2982]: I0517 00:25:17.739555 2982 policy_none.go:49] "None policy: Start" May 17 00:25:17.740279 kubelet[2982]: I0517 00:25:17.740262 2982 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:25:17.740279 kubelet[2982]: I0517 00:25:17.740283 2982 state_mem.go:35] "Initializing new in-memory state store" May 17 00:25:17.748371 kubelet[2982]: I0517 00:25:17.748107 2982 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:25:17.748371 kubelet[2982]: I0517 00:25:17.748278 2982 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:25:17.748371 kubelet[2982]: I0517 00:25:17.748288 2982 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:25:17.749807 kubelet[2982]: I0517 00:25:17.749789 2982 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:25:17.752027 kubelet[2982]: E0517 00:25:17.751950 2982 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-31-125\" not found" May 17 00:25:17.850630 kubelet[2982]: I0517 00:25:17.849817 2982 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-125" May 17 00:25:17.850630 kubelet[2982]: E0517 00:25:17.850124 2982 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.125:6443/api/v1/nodes\": dial tcp 172.31.31.125:6443: connect: connection refused" node="ip-172-31-31-125" May 17 00:25:17.877340 kubelet[2982]: E0517 00:25:17.877296 2982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-125?timeout=10s\": dial tcp 172.31.31.125:6443: connect: connection refused" interval="400ms" May 17 00:25:17.980039 kubelet[2982]: I0517 
May 17 00:25:17.980039 kubelet[2982]: I0517 00:25:17.979996 2982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/48b3c8cebd061a8d936bfc93163ae762-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-125\" (UID: \"48b3c8cebd061a8d936bfc93163ae762\") " pod="kube-system/kube-scheduler-ip-172-31-31-125"
May 17 00:25:17.980039 kubelet[2982]: I0517 00:25:17.980039 2982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9ea56fb6b100c439a144541c394a0149-ca-certs\") pod \"kube-apiserver-ip-172-31-31-125\" (UID: \"9ea56fb6b100c439a144541c394a0149\") " pod="kube-system/kube-apiserver-ip-172-31-31-125"
May 17 00:25:17.980039 kubelet[2982]: I0517 00:25:17.980060 2982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f9740ec71bcdeb5ea41cc196d2d2b41-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-125\" (UID: \"2f9740ec71bcdeb5ea41cc196d2d2b41\") " pod="kube-system/kube-controller-manager-ip-172-31-31-125"
May 17 00:25:17.980246 kubelet[2982]: I0517 00:25:17.980078 2982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2f9740ec71bcdeb5ea41cc196d2d2b41-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-125\" (UID: \"2f9740ec71bcdeb5ea41cc196d2d2b41\") " pod="kube-system/kube-controller-manager-ip-172-31-31-125"
May 17 00:25:17.980246 kubelet[2982]: I0517 00:25:17.980096 2982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9ea56fb6b100c439a144541c394a0149-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-125\" (UID: \"9ea56fb6b100c439a144541c394a0149\") " pod="kube-system/kube-apiserver-ip-172-31-31-125"
May 17 00:25:17.980246 kubelet[2982]: I0517 00:25:17.980113 2982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9ea56fb6b100c439a144541c394a0149-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-125\" (UID: \"9ea56fb6b100c439a144541c394a0149\") " pod="kube-system/kube-apiserver-ip-172-31-31-125"
May 17 00:25:17.980246 kubelet[2982]: I0517 00:25:17.980139 2982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2f9740ec71bcdeb5ea41cc196d2d2b41-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-125\" (UID: \"2f9740ec71bcdeb5ea41cc196d2d2b41\") " pod="kube-system/kube-controller-manager-ip-172-31-31-125"
May 17 00:25:17.980246 kubelet[2982]: I0517 00:25:17.980153 2982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2f9740ec71bcdeb5ea41cc196d2d2b41-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-125\" (UID: \"2f9740ec71bcdeb5ea41cc196d2d2b41\") " pod="kube-system/kube-controller-manager-ip-172-31-31-125"
May 17 00:25:17.980367 kubelet[2982]: I0517 00:25:17.980174 2982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2f9740ec71bcdeb5ea41cc196d2d2b41-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-125\" (UID: \"2f9740ec71bcdeb5ea41cc196d2d2b41\") " pod="kube-system/kube-controller-manager-ip-172-31-31-125"
\"kube-controller-manager-ip-172-31-31-125\" (UID: \"2f9740ec71bcdeb5ea41cc196d2d2b41\") " pod="kube-system/kube-controller-manager-ip-172-31-31-125" May 17 00:25:18.052638 kubelet[2982]: I0517 00:25:18.052607 2982 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-125" May 17 00:25:18.052996 kubelet[2982]: E0517 00:25:18.052958 2982 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.125:6443/api/v1/nodes\": dial tcp 172.31.31.125:6443: connect: connection refused" node="ip-172-31-31-125" May 17 00:25:18.115953 containerd[2119]: time="2025-05-17T00:25:18.115730900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-125,Uid:9ea56fb6b100c439a144541c394a0149,Namespace:kube-system,Attempt:0,}" May 17 00:25:18.115953 containerd[2119]: time="2025-05-17T00:25:18.115771657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-125,Uid:2f9740ec71bcdeb5ea41cc196d2d2b41,Namespace:kube-system,Attempt:0,}" May 17 00:25:18.120167 containerd[2119]: time="2025-05-17T00:25:18.120128303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-125,Uid:48b3c8cebd061a8d936bfc93163ae762,Namespace:kube-system,Attempt:0,}" May 17 00:25:18.277858 kubelet[2982]: E0517 00:25:18.277794 2982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-125?timeout=10s\": dial tcp 172.31.31.125:6443: connect: connection refused" interval="800ms" May 17 00:25:18.455497 kubelet[2982]: I0517 00:25:18.455397 2982 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-125" May 17 00:25:18.455891 kubelet[2982]: E0517 00:25:18.455707 2982 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.125:6443/api/v1/nodes\": dial tcp 172.31.31.125:6443: connect: connection refused" node="ip-172-31-31-125" May 17 00:25:18.625872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3935314052.mount: Deactivated successfully. 
May 17 00:25:18.643023 containerd[2119]: time="2025-05-17T00:25:18.642953296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 17 00:25:18.645243 containerd[2119]: time="2025-05-17T00:25:18.645189446Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 17 00:25:18.647084 containerd[2119]: time="2025-05-17T00:25:18.647035442Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
May 17 00:25:18.649192 containerd[2119]: time="2025-05-17T00:25:18.649139856Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 17 00:25:18.651681 containerd[2119]: time="2025-05-17T00:25:18.651619009Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 17 00:25:18.654454 containerd[2119]: time="2025-05-17T00:25:18.654405444Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 17 00:25:18.655366 containerd[2119]: time="2025-05-17T00:25:18.655200900Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 17 00:25:18.657380 containerd[2119]: time="2025-05-17T00:25:18.657310234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 17 00:25:18.659603 containerd[2119]: time="2025-05-17T00:25:18.658048614Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 537.851632ms"
May 17 00:25:18.660341 containerd[2119]: time="2025-05-17T00:25:18.660298358Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 544.496534ms"
May 17 00:25:18.671539 containerd[2119]: time="2025-05-17T00:25:18.671488325Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 555.659861ms"
May 17 00:25:18.747385 kubelet[2982]: W0517 00:25:18.746887 2982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.31.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.125:6443: connect: connection refused
May 17 00:25:18.747385 kubelet[2982]: E0517 00:25:18.746941 2982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.31.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.125:6443: connect: connection refused" logger="UnhandledError"
May 17 00:25:18.814556 kubelet[2982]: W0517 00:25:18.814416 2982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.31.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-125&limit=500&resourceVersion=0": dial tcp 172.31.31.125:6443: connect: connection refused
May 17 00:25:18.814556 kubelet[2982]: E0517 00:25:18.814505 2982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.31.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-125&limit=500&resourceVersion=0\": dial tcp 172.31.31.125:6443: connect: connection refused" logger="UnhandledError"
May 17 00:25:18.855662 containerd[2119]: time="2025-05-17T00:25:18.855284049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:25:18.855662 containerd[2119]: time="2025-05-17T00:25:18.855426325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:25:18.855662 containerd[2119]: time="2025-05-17T00:25:18.855442671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:25:18.856285 containerd[2119]: time="2025-05-17T00:25:18.855718642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:25:18.865895 containerd[2119]: time="2025-05-17T00:25:18.865630602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:25:18.865895 containerd[2119]: time="2025-05-17T00:25:18.865692388Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:25:18.865895 containerd[2119]: time="2025-05-17T00:25:18.865705038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:25:18.865895 containerd[2119]: time="2025-05-17T00:25:18.865780677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:25:18.871772 containerd[2119]: time="2025-05-17T00:25:18.871111338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:25:18.872062 containerd[2119]: time="2025-05-17T00:25:18.871977816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:25:18.872654 containerd[2119]: time="2025-05-17T00:25:18.872034908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:25:18.874242 containerd[2119]: time="2025-05-17T00:25:18.874188662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:25:19.006993 containerd[2119]: time="2025-05-17T00:25:19.006677197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-125,Uid:48b3c8cebd061a8d936bfc93163ae762,Namespace:kube-system,Attempt:0,} returns sandbox id \"31d39baced9c8d36a4b980976dfc6ae0da331ca1499e04332dee70e11e577100\""
May 17 00:25:19.009637 containerd[2119]: time="2025-05-17T00:25:19.009311045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-125,Uid:9ea56fb6b100c439a144541c394a0149,Namespace:kube-system,Attempt:0,} returns sandbox id \"c81effab71b7b6e820fa70580397e7d1957e13d559fa9338db2a054786a09745\""
May 17 00:25:19.012183 containerd[2119]: time="2025-05-17T00:25:19.012139396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-125,Uid:2f9740ec71bcdeb5ea41cc196d2d2b41,Namespace:kube-system,Attempt:0,} returns sandbox id \"abbb044e12017a0daef52eb470801ffc65664c9897124462e9ae52c1603e12cd\""
May 17 00:25:19.022381 containerd[2119]: time="2025-05-17T00:25:19.022338402Z" level=info msg="CreateContainer within sandbox \"31d39baced9c8d36a4b980976dfc6ae0da331ca1499e04332dee70e11e577100\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 17 00:25:19.024610 containerd[2119]: time="2025-05-17T00:25:19.023079770Z" level=info msg="CreateContainer within sandbox \"abbb044e12017a0daef52eb470801ffc65664c9897124462e9ae52c1603e12cd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 17 00:25:19.025137 containerd[2119]: time="2025-05-17T00:25:19.024835528Z" level=info msg="CreateContainer within sandbox \"c81effab71b7b6e820fa70580397e7d1957e13d559fa9338db2a054786a09745\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 17 00:25:19.075595 containerd[2119]: time="2025-05-17T00:25:19.075533246Z" level=info msg="CreateContainer within sandbox \"31d39baced9c8d36a4b980976dfc6ae0da331ca1499e04332dee70e11e577100\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"40605d88f4b54d50524775ede31df9e755b7182370ca5e56b09861c6028bbee9\""
May 17 00:25:19.076707 containerd[2119]: time="2025-05-17T00:25:19.076667637Z" level=info msg="CreateContainer within sandbox \"c81effab71b7b6e820fa70580397e7d1957e13d559fa9338db2a054786a09745\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"16fed73d9c33ec52d54da8a1fc6e74945d3ef9dfd6bd68e6ecae4f75486c96f7\""
May 17 00:25:19.077017 containerd[2119]: time="2025-05-17T00:25:19.076951532Z" level=info msg="StartContainer for \"40605d88f4b54d50524775ede31df9e755b7182370ca5e56b09861c6028bbee9\""
May 17 00:25:19.078726 kubelet[2982]: E0517 00:25:19.078675 2982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-125?timeout=10s\": dial tcp 172.31.31.125:6443: connect: connection refused" interval="1.6s"
May 17 00:25:19.082307 containerd[2119]: time="2025-05-17T00:25:19.081165293Z" level=info msg="CreateContainer within sandbox \"abbb044e12017a0daef52eb470801ffc65664c9897124462e9ae52c1603e12cd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f8e4db560ef507652553ba59610b3fba56104601a16cf1636ce379c9c7a57109\""
May 17 00:25:19.082307 containerd[2119]: time="2025-05-17T00:25:19.081358206Z" level=info msg="StartContainer for \"16fed73d9c33ec52d54da8a1fc6e74945d3ef9dfd6bd68e6ecae4f75486c96f7\""
\"16fed73d9c33ec52d54da8a1fc6e74945d3ef9dfd6bd68e6ecae4f75486c96f7\"" May 17 00:25:19.086542 containerd[2119]: time="2025-05-17T00:25:19.086515357Z" level=info msg="StartContainer for \"f8e4db560ef507652553ba59610b3fba56104601a16cf1636ce379c9c7a57109\"" May 17 00:25:19.183304 containerd[2119]: time="2025-05-17T00:25:19.183273024Z" level=info msg="StartContainer for \"16fed73d9c33ec52d54da8a1fc6e74945d3ef9dfd6bd68e6ecae4f75486c96f7\" returns successfully" May 17 00:25:19.206313 containerd[2119]: time="2025-05-17T00:25:19.206275876Z" level=info msg="StartContainer for \"40605d88f4b54d50524775ede31df9e755b7182370ca5e56b09861c6028bbee9\" returns successfully" May 17 00:25:19.206537 containerd[2119]: time="2025-05-17T00:25:19.206302608Z" level=info msg="StartContainer for \"f8e4db560ef507652553ba59610b3fba56104601a16cf1636ce379c9c7a57109\" returns successfully" May 17 00:25:19.210223 kubelet[2982]: W0517 00:25:19.210166 2982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.31.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.125:6443: connect: connection refused May 17 00:25:19.210369 kubelet[2982]: E0517 00:25:19.210232 2982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.31.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.125:6443: connect: connection refused" logger="UnhandledError" May 17 00:25:19.218006 kubelet[2982]: W0517 00:25:19.217939 2982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.31.125:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.31.125:6443: connect: connection refused May 17 00:25:19.218006 kubelet[2982]: E0517 00:25:19.218009 2982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.31.125:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.125:6443: connect: connection refused" logger="UnhandledError" May 17 00:25:19.260119 kubelet[2982]: I0517 00:25:19.259983 2982 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-125" May 17 00:25:19.260291 kubelet[2982]: E0517 00:25:19.260268 2982 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.125:6443/api/v1/nodes\": dial tcp 172.31.31.125:6443: connect: connection refused" node="ip-172-31-31-125" May 17 00:25:19.796446 kubelet[2982]: E0517 00:25:19.796402 2982 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.31.125:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.31.125:6443: connect: connection refused" logger="UnhandledError" May 17 00:25:19.943620 kubelet[2982]: E0517 00:25:19.942969 2982 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.125:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.125:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-31-125.184028cd4db57a8d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
May 17 00:25:20.864032 kubelet[2982]: I0517 00:25:20.863874 2982 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-125"
May 17 00:25:22.061035 kubelet[2982]: E0517 00:25:22.060996 2982 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-31-125\" not found" node="ip-172-31-31-125"
May 17 00:25:22.137791 kubelet[2982]: I0517 00:25:22.136433 2982 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-31-125"
May 17 00:25:22.137791 kubelet[2982]: E0517 00:25:22.136473 2982 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-31-125\": node \"ip-172-31-31-125\" not found"
May 17 00:25:22.532515 kubelet[2982]: E0517 00:25:22.532470 2982 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-31-125\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-31-125"
May 17 00:25:22.650017 kubelet[2982]: I0517 00:25:22.649981 2982 apiserver.go:52] "Watching apiserver"
May 17 00:25:22.679977 kubelet[2982]: I0517 00:25:22.679938 2982 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
May 17 00:25:24.296525 systemd[1]: Reloading requested from client PID 3258 ('systemctl') (unit session-9.scope)...
May 17 00:25:24.296545 systemd[1]: Reloading...
May 17 00:25:24.400617 zram_generator::config[3298]: No configuration found.
May 17 00:25:24.551209 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:25:24.646463 systemd[1]: Reloading finished in 349 ms.
May 17 00:25:24.678728 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:25:24.691061 systemd[1]: kubelet.service: Deactivated successfully.
May 17 00:25:24.691349 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:25:24.699134 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:25:24.999024 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:25:25.011197 (kubelet)[3368]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 17 00:25:25.096851 kubelet[3368]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 17 00:25:25.096851 kubelet[3368]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 17 00:25:25.096851 kubelet[3368]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 17 00:25:25.097339 kubelet[3368]: I0517 00:25:25.096918 3368 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 17 00:25:25.104651 kubelet[3368]: I0517 00:25:25.104557 3368 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
May 17 00:25:25.104651 kubelet[3368]: I0517 00:25:25.104609 3368 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 17 00:25:25.104875 kubelet[3368]: I0517 00:25:25.104857 3368 server.go:934] "Client rotation is on, will bootstrap in background"
May 17 00:25:25.108607 kubelet[3368]: I0517 00:25:25.106874 3368 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 17 00:25:25.117495 kubelet[3368]: I0517 00:25:25.116910 3368 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 17 00:25:25.123944 kubelet[3368]: E0517 00:25:25.123910 3368 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 17 00:25:25.124087 kubelet[3368]: I0517 00:25:25.124077 3368 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 17 00:25:25.126656 kubelet[3368]: I0517 00:25:25.126637 3368 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
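
Unlike the earlier run, kubelet[3368] now finds a bootstrapped client credential at /var/lib/kubelet/pki/kubelet-client-current.pem, a single PEM file holding both the certificate chain and the private key. A hedged stdlib sketch of loading such a combined file; passing the same path for both arguments is believed to work because LoadX509KeyPair extracts CERTIFICATE blocks from one read and the private-key block from the other:

package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	// Combined cert+key PEM, as written by kubelet's client rotation.
	const pem = "/var/lib/kubelet/pki/kubelet-client-current.pem"
	cert, err := tls.LoadX509KeyPair(pem, pem)
	if err != nil {
		fmt.Println("no usable client credential yet:", err)
		return
	}
	fmt.Printf("loaded client chain with %d certificate(s)\n", len(cert.Certificate))
}
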
defaulting to /" May 17 00:25:25.127096 kubelet[3368]: I0517 00:25:25.127086 3368 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:25:25.127279 kubelet[3368]: I0517 00:25:25.127252 3368 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:25:25.127489 kubelet[3368]: I0517 00:25:25.127331 3368 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-125","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} May 17 00:25:25.127630 kubelet[3368]: I0517 00:25:25.127620 3368 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:25:25.127685 kubelet[3368]: I0517 00:25:25.127680 3368 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:25:25.127746 kubelet[3368]: I0517 00:25:25.127741 3368 state_mem.go:36] "Initialized new in-memory state store" May 17 00:25:25.127879 kubelet[3368]: I0517 00:25:25.127872 3368 kubelet.go:408] "Attempting to sync node with API server" May 17 00:25:25.127932 kubelet[3368]: I0517 00:25:25.127927 3368 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:25:25.127995 kubelet[3368]: I0517 00:25:25.127990 3368 kubelet.go:314] "Adding apiserver pod source" May 17 00:25:25.128038 kubelet[3368]: I0517 00:25:25.128033 3368 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:25:25.128637 kubelet[3368]: I0517 00:25:25.128618 3368 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:25:25.129039 kubelet[3368]: I0517 00:25:25.129024 3368 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:25:25.131934 kubelet[3368]: I0517 00:25:25.131919 3368 server.go:1274] "Started kubelet" May 17 00:25:25.136220 kubelet[3368]: I0517 00:25:25.136200 3368 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:25:25.142152 kubelet[3368]: I0517 
May 17 00:25:25.142152 kubelet[3368]: I0517 00:25:25.142113 3368 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 17 00:25:25.143906 kubelet[3368]: I0517 00:25:25.143886 3368 server.go:449] "Adding debug handlers to kubelet server"
May 17 00:25:25.151564 kubelet[3368]: I0517 00:25:25.144457 3368 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 17 00:25:25.151564 kubelet[3368]: I0517 00:25:25.151190 3368 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 17 00:25:25.152650 kubelet[3368]: I0517 00:25:25.148039 3368 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 17 00:25:25.152650 kubelet[3368]: I0517 00:25:25.145343 3368 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 17 00:25:25.152650 kubelet[3368]: I0517 00:25:25.148291 3368 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
May 17 00:25:25.152650 kubelet[3368]: I0517 00:25:25.152418 3368 reconciler.go:26] "Reconciler: start to sync state"
May 17 00:25:25.152650 kubelet[3368]: E0517 00:25:25.149210 3368 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-31-125\" not found"
May 17 00:25:25.175708 kubelet[3368]: I0517 00:25:25.175629 3368 factory.go:221] Registration of the systemd container factory successfully
May 17 00:25:25.175835 kubelet[3368]: I0517 00:25:25.175726 3368 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 17 00:25:25.177399 kubelet[3368]: I0517 00:25:25.177370 3368 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 17 00:25:25.178360 kubelet[3368]: I0517 00:25:25.178338 3368 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 17 00:25:25.178429 kubelet[3368]: I0517 00:25:25.178373 3368 status_manager.go:217] "Starting to sync pod status with apiserver"
May 17 00:25:25.178429 kubelet[3368]: I0517 00:25:25.178391 3368 kubelet.go:2321] "Starting kubelet main sync loop"
May 17 00:25:25.178497 kubelet[3368]: E0517 00:25:25.178427 3368 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 17 00:25:25.186953 kubelet[3368]: E0517 00:25:25.186922 3368 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 17 00:25:25.189564 kubelet[3368]: I0517 00:25:25.189533 3368 factory.go:221] Registration of the containerd container factory successfully
May 17 00:25:25.256901 kubelet[3368]: I0517 00:25:25.256227 3368 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 17 00:25:25.256901 kubelet[3368]: I0517 00:25:25.256248 3368 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 17 00:25:25.256901 kubelet[3368]: I0517 00:25:25.256269 3368 state_mem.go:36] "Initialized new in-memory state store"
May 17 00:25:25.256901 kubelet[3368]: I0517 00:25:25.256442 3368 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 17 00:25:25.256901 kubelet[3368]: I0517 00:25:25.256458 3368 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 17 00:25:25.256901 kubelet[3368]: I0517 00:25:25.256486 3368 policy_none.go:49] "None policy: Start"
May 17 00:25:25.257700 kubelet[3368]: I0517 00:25:25.257683 3368 memory_manager.go:170] "Starting memorymanager" policy="None"
May 17 00:25:25.258647 kubelet[3368]: I0517 00:25:25.257811 3368 state_mem.go:35] "Initializing new in-memory state store"
May 17 00:25:25.258647 kubelet[3368]: I0517 00:25:25.258010 3368 state_mem.go:75] "Updated machine memory state"
May 17 00:25:25.261778 kubelet[3368]: I0517 00:25:25.261736 3368 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 17 00:25:25.261962 kubelet[3368]: I0517 00:25:25.261943 3368 eviction_manager.go:189] "Eviction manager: starting control loop"
May 17 00:25:25.262021 kubelet[3368]: I0517 00:25:25.261961 3368 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 17 00:25:25.262371 kubelet[3368]: I0517 00:25:25.262345 3368 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 17 00:25:25.296374 kubelet[3368]: E0517 00:25:25.296319 3368 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-31-125\" already exists" pod="kube-system/kube-apiserver-ip-172-31-31-125"
May 17 00:25:25.371163 kubelet[3368]: I0517 00:25:25.370894 3368 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-125"
May 17 00:25:25.379033 kubelet[3368]: I0517 00:25:25.378996 3368 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-31-125"
May 17 00:25:25.379175 kubelet[3368]: I0517 00:25:25.379074 3368 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-31-125"
May 17 00:25:25.453889 kubelet[3368]: I0517 00:25:25.453462 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9ea56fb6b100c439a144541c394a0149-ca-certs\") pod \"kube-apiserver-ip-172-31-31-125\" (UID: \"9ea56fb6b100c439a144541c394a0149\") " pod="kube-system/kube-apiserver-ip-172-31-31-125"
May 17 00:25:25.453889 kubelet[3368]: I0517 00:25:25.453538 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9ea56fb6b100c439a144541c394a0149-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-125\" (UID: \"9ea56fb6b100c439a144541c394a0149\") " pod="kube-system/kube-apiserver-ip-172-31-31-125"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f9740ec71bcdeb5ea41cc196d2d2b41-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-125\" (UID: \"2f9740ec71bcdeb5ea41cc196d2d2b41\") " pod="kube-system/kube-controller-manager-ip-172-31-31-125" May 17 00:25:25.453889 kubelet[3368]: I0517 00:25:25.453611 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/48b3c8cebd061a8d936bfc93163ae762-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-125\" (UID: \"48b3c8cebd061a8d936bfc93163ae762\") " pod="kube-system/kube-scheduler-ip-172-31-31-125" May 17 00:25:25.453889 kubelet[3368]: I0517 00:25:25.453636 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9ea56fb6b100c439a144541c394a0149-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-125\" (UID: \"9ea56fb6b100c439a144541c394a0149\") " pod="kube-system/kube-apiserver-ip-172-31-31-125" May 17 00:25:25.454269 kubelet[3368]: I0517 00:25:25.453658 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2f9740ec71bcdeb5ea41cc196d2d2b41-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-125\" (UID: \"2f9740ec71bcdeb5ea41cc196d2d2b41\") " pod="kube-system/kube-controller-manager-ip-172-31-31-125" May 17 00:25:25.454269 kubelet[3368]: I0517 00:25:25.453689 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2f9740ec71bcdeb5ea41cc196d2d2b41-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-125\" (UID: \"2f9740ec71bcdeb5ea41cc196d2d2b41\") " pod="kube-system/kube-controller-manager-ip-172-31-31-125" May 17 00:25:25.454269 kubelet[3368]: I0517 00:25:25.453712 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2f9740ec71bcdeb5ea41cc196d2d2b41-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-125\" (UID: \"2f9740ec71bcdeb5ea41cc196d2d2b41\") " pod="kube-system/kube-controller-manager-ip-172-31-31-125" May 17 00:25:25.454269 kubelet[3368]: I0517 00:25:25.453737 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2f9740ec71bcdeb5ea41cc196d2d2b41-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-125\" (UID: \"2f9740ec71bcdeb5ea41cc196d2d2b41\") " pod="kube-system/kube-controller-manager-ip-172-31-31-125" May 17 00:25:26.140969 kubelet[3368]: I0517 00:25:26.140685 3368 apiserver.go:52] "Watching apiserver" May 17 00:25:26.152703 kubelet[3368]: I0517 00:25:26.152615 3368 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:25:26.251678 kubelet[3368]: E0517 00:25:26.251629 3368 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-31-125\" already exists" pod="kube-system/kube-apiserver-ip-172-31-31-125" May 17 00:25:26.262612 kubelet[3368]: I0517 00:25:26.261916 3368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-31-125" podStartSLOduration=1.261896101 podStartE2EDuration="1.261896101s" 
podCreationTimestamp="2025-05-17 00:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:25:26.259972137 +0000 UTC m=+1.241698876" watchObservedRunningTime="2025-05-17 00:25:26.261896101 +0000 UTC m=+1.243622839" May 17 00:25:26.292908 kubelet[3368]: I0517 00:25:26.292625 3368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-31-125" podStartSLOduration=1.292605295 podStartE2EDuration="1.292605295s" podCreationTimestamp="2025-05-17 00:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:25:26.275707202 +0000 UTC m=+1.257433941" watchObservedRunningTime="2025-05-17 00:25:26.292605295 +0000 UTC m=+1.274332032" May 17 00:25:26.292908 kubelet[3368]: I0517 00:25:26.292860 3368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-31-125" podStartSLOduration=3.2928491810000002 podStartE2EDuration="3.292849181s" podCreationTimestamp="2025-05-17 00:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:25:26.292255182 +0000 UTC m=+1.273981922" watchObservedRunningTime="2025-05-17 00:25:26.292849181 +0000 UTC m=+1.274575919" May 17 00:25:27.710057 update_engine[2081]: I20250517 00:25:27.709980 2081 update_attempter.cc:509] Updating boot flags... May 17 00:25:27.790664 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3422) May 17 00:25:27.932606 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3421) May 17 00:25:28.113769 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3421) May 17 00:25:29.569560 kubelet[3368]: I0517 00:25:29.569528 3368 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 00:25:29.570397 containerd[2119]: time="2025-05-17T00:25:29.570360729Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 17 00:25:29.570910 kubelet[3368]: I0517 00:25:29.570878 3368 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 17 00:25:30.597619 kubelet[3368]: I0517 00:25:30.597548 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cvc7\" (UniqueName: \"kubernetes.io/projected/1c7c8753-4d96-41c1-9395-8fca8a364fd5-kube-api-access-8cvc7\") pod \"kube-proxy-b7tqw\" (UID: \"1c7c8753-4d96-41c1-9395-8fca8a364fd5\") " pod="kube-system/kube-proxy-b7tqw"
May 17 00:25:30.597619 kubelet[3368]: I0517 00:25:30.597626 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c7c8753-4d96-41c1-9395-8fca8a364fd5-xtables-lock\") pod \"kube-proxy-b7tqw\" (UID: \"1c7c8753-4d96-41c1-9395-8fca8a364fd5\") " pod="kube-system/kube-proxy-b7tqw"
May 17 00:25:30.598078 kubelet[3368]: I0517 00:25:30.597647 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c7c8753-4d96-41c1-9395-8fca8a364fd5-lib-modules\") pod \"kube-proxy-b7tqw\" (UID: \"1c7c8753-4d96-41c1-9395-8fca8a364fd5\") " pod="kube-system/kube-proxy-b7tqw"
May 17 00:25:30.598078 kubelet[3368]: I0517 00:25:30.597665 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1c7c8753-4d96-41c1-9395-8fca8a364fd5-kube-proxy\") pod \"kube-proxy-b7tqw\" (UID: \"1c7c8753-4d96-41c1-9395-8fca8a364fd5\") " pod="kube-system/kube-proxy-b7tqw"
May 17 00:25:30.698556 kubelet[3368]: I0517 00:25:30.698497 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/63bc499a-deeb-4b34-aa9a-91e46a99b3b1-var-lib-calico\") pod \"tigera-operator-7c5755cdcb-f6pwr\" (UID: \"63bc499a-deeb-4b34-aa9a-91e46a99b3b1\") " pod="tigera-operator/tigera-operator-7c5755cdcb-f6pwr"
May 17 00:25:30.698556 kubelet[3368]: I0517 00:25:30.698560 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpbs5\" (UniqueName: \"kubernetes.io/projected/63bc499a-deeb-4b34-aa9a-91e46a99b3b1-kube-api-access-gpbs5\") pod \"tigera-operator-7c5755cdcb-f6pwr\" (UID: \"63bc499a-deeb-4b34-aa9a-91e46a99b3b1\") " pod="tigera-operator/tigera-operator-7c5755cdcb-f6pwr"
May 17 00:25:30.837258 containerd[2119]: time="2025-05-17T00:25:30.837201457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b7tqw,Uid:1c7c8753-4d96-41c1-9395-8fca8a364fd5,Namespace:kube-system,Attempt:0,}"
May 17 00:25:30.870340 containerd[2119]: time="2025-05-17T00:25:30.869853468Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:25:30.870340 containerd[2119]: time="2025-05-17T00:25:30.869929480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:25:30.870340 containerd[2119]: time="2025-05-17T00:25:30.869952015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:25:30.870340 containerd[2119]: time="2025-05-17T00:25:30.870136431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:25:30.922221 containerd[2119]: time="2025-05-17T00:25:30.922176296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b7tqw,Uid:1c7c8753-4d96-41c1-9395-8fca8a364fd5,Namespace:kube-system,Attempt:0,} returns sandbox id \"91f156d54decf11658c61cb876c28e204e338eae206f747999da4af7b947b8eb\""
May 17 00:25:30.926077 containerd[2119]: time="2025-05-17T00:25:30.925800357Z" level=info msg="CreateContainer within sandbox \"91f156d54decf11658c61cb876c28e204e338eae206f747999da4af7b947b8eb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 17 00:25:30.955564 containerd[2119]: time="2025-05-17T00:25:30.955525093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-f6pwr,Uid:63bc499a-deeb-4b34-aa9a-91e46a99b3b1,Namespace:tigera-operator,Attempt:0,}"
May 17 00:25:30.975707 containerd[2119]: time="2025-05-17T00:25:30.975640259Z" level=info msg="CreateContainer within sandbox \"91f156d54decf11658c61cb876c28e204e338eae206f747999da4af7b947b8eb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9eac9ee639148eb15dcb282ab44cb4ee0de6ec271deec1cb36447101126b463d\""
May 17 00:25:30.976404 containerd[2119]: time="2025-05-17T00:25:30.976377476Z" level=info msg="StartContainer for \"9eac9ee639148eb15dcb282ab44cb4ee0de6ec271deec1cb36447101126b463d\""
May 17 00:25:31.004688 containerd[2119]: time="2025-05-17T00:25:31.004567424Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:25:31.004688 containerd[2119]: time="2025-05-17T00:25:31.004626374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:25:31.004688 containerd[2119]: time="2025-05-17T00:25:31.004641028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:25:31.005512 containerd[2119]: time="2025-05-17T00:25:31.004730260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
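The containerd entries around here trace the standard CRI bring-up sequence for kube-proxy: RunPodSandbox returns a sandbox id, CreateContainer within that sandbox returns a container id, and StartContainer runs it. A hedged sketch of the same three calls made directly against a CRI endpoint follows; it assumes containerd's default socket path and the stock k8s.io/cri-api v1 types, and the image reference is illustrative since the log does not name one.

// cri_sequence.go - hedged sketch of the RunPodSandbox -> CreateContainer
// -> StartContainer sequence visible in the surrounding containerd lines.
// Socket path and image reference are assumptions, not taken from the log.
package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	criv1 "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx := context.Background()
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := criv1.NewRuntimeServiceClient(conn)

	// Sandbox metadata mirrors the &PodSandboxMetadata{...} printed in the log.
	sandboxCfg := &criv1.PodSandboxConfig{
		Metadata: &criv1.PodSandboxMetadata{
			Name:      "kube-proxy-b7tqw",
			Uid:       "1c7c8753-4d96-41c1-9395-8fca8a364fd5",
			Namespace: "kube-system",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &criv1.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		panic(err)
	}

	ctr, err := rt.CreateContainer(ctx, &criv1.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &criv1.ContainerConfig{
			Metadata: &criv1.ContainerMetadata{Name: "kube-proxy", Attempt: 0},
			Image:    &criv1.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.31.0"}, // illustrative tag
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		panic(err)
	}

	// The log's "StartContainer ... returns successfully" corresponds to this call.
	if _, err := rt.StartContainer(ctx, &criv1.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		panic(err)
	}
}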
May 17 00:25:31.073883 containerd[2119]: time="2025-05-17T00:25:31.073833734Z" level=info msg="StartContainer for \"9eac9ee639148eb15dcb282ab44cb4ee0de6ec271deec1cb36447101126b463d\" returns successfully"
May 17 00:25:31.089378 containerd[2119]: time="2025-05-17T00:25:31.089324748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-f6pwr,Uid:63bc499a-deeb-4b34-aa9a-91e46a99b3b1,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"bfaa8554fcc731f01b386ee2354ee1180856134d3c6c60595b3ff1fb23ef0ea7\""
May 17 00:25:31.091626 containerd[2119]: time="2025-05-17T00:25:31.091308733Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\""
May 17 00:25:31.262269 kubelet[3368]: I0517 00:25:31.262098 3368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b7tqw" podStartSLOduration=1.262080872 podStartE2EDuration="1.262080872s" podCreationTimestamp="2025-05-17 00:25:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:25:31.2617166 +0000 UTC m=+6.243443337" watchObservedRunningTime="2025-05-17 00:25:31.262080872 +0000 UTC m=+6.243807609"
May 17 00:25:32.897799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount697608700.mount: Deactivated successfully.
May 17 00:25:33.815132 containerd[2119]: time="2025-05-17T00:25:33.815080763Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:25:33.816205 containerd[2119]: time="2025-05-17T00:25:33.816042879Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451"
May 17 00:25:33.817202 containerd[2119]: time="2025-05-17T00:25:33.817132137Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:25:33.819256 containerd[2119]: time="2025-05-17T00:25:33.819201821Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:25:33.820284 containerd[2119]: time="2025-05-17T00:25:33.819786389Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 2.728441623s"
May 17 00:25:33.820284 containerd[2119]: time="2025-05-17T00:25:33.819820103Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\""
May 17 00:25:33.834196 containerd[2119]: time="2025-05-17T00:25:33.834154826Z" level=info msg="CreateContainer within sandbox \"bfaa8554fcc731f01b386ee2354ee1180856134d3c6c60595b3ff1fb23ef0ea7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
May 17 00:25:33.846865 containerd[2119]: time="2025-05-17T00:25:33.846813156Z" level=info msg="CreateContainer within sandbox \"bfaa8554fcc731f01b386ee2354ee1180856134d3c6c60595b3ff1fb23ef0ea7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"58031154b57a0820da55b549141a9a2974d99fe16e577cd33dc624ea0c2e3977\""
May 17 00:25:33.848332 containerd[2119]: time="2025-05-17T00:25:33.848045783Z" level=info msg="StartContainer for \"58031154b57a0820da55b549141a9a2974d99fe16e577cd33dc624ea0c2e3977\""
May 17 00:25:33.910507 containerd[2119]: time="2025-05-17T00:25:33.910439828Z" level=info msg="StartContainer for \"58031154b57a0820da55b549141a9a2974d99fe16e577cd33dc624ea0c2e3977\" returns successfully"
May 17 00:25:40.908395 sudo[2482]: pam_unix(sudo:session): session closed for user root
May 17 00:25:40.937212 sshd[2478]: pam_unix(sshd:session): session closed for user core
May 17 00:25:40.955066 systemd-logind[2074]: Session 9 logged out. Waiting for processes to exit.
May 17 00:25:40.956080 systemd[1]: sshd@8-172.31.31.125:22-147.75.109.163:40892.service: Deactivated successfully.
May 17 00:25:40.968573 systemd[1]: session-9.scope: Deactivated successfully.
May 17 00:25:40.974558 systemd-logind[2074]: Removed session 9.
May 17 00:25:45.860907 kubelet[3368]: I0517 00:25:45.858744 3368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7c5755cdcb-f6pwr" podStartSLOduration=13.123340701 podStartE2EDuration="15.85872725s" podCreationTimestamp="2025-05-17 00:25:30 +0000 UTC" firstStartedPulling="2025-05-17 00:25:31.090709243 +0000 UTC m=+6.072435962" lastFinishedPulling="2025-05-17 00:25:33.826095794 +0000 UTC m=+8.807822511" observedRunningTime="2025-05-17 00:25:34.29619614 +0000 UTC m=+9.277922878" watchObservedRunningTime="2025-05-17 00:25:45.85872725 +0000 UTC m=+20.840453986"
May 17 00:25:46.018184 kubelet[3368]: I0517 00:25:46.018080 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/838365c3-3ba6-496d-82c3-b866690fd579-tigera-ca-bundle\") pod \"calico-typha-6d5fd567fb-29pn7\" (UID: \"838365c3-3ba6-496d-82c3-b866690fd579\") " pod="calico-system/calico-typha-6d5fd567fb-29pn7"
May 17 00:25:46.018325 kubelet[3368]: I0517 00:25:46.018272 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rjr8\" (UniqueName: \"kubernetes.io/projected/838365c3-3ba6-496d-82c3-b866690fd579-kube-api-access-9rjr8\") pod \"calico-typha-6d5fd567fb-29pn7\" (UID: \"838365c3-3ba6-496d-82c3-b866690fd579\") " pod="calico-system/calico-typha-6d5fd567fb-29pn7"
May 17 00:25:46.018325 kubelet[3368]: I0517 00:25:46.018295 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/838365c3-3ba6-496d-82c3-b866690fd579-typha-certs\") pod \"calico-typha-6d5fd567fb-29pn7\" (UID: \"838365c3-3ba6-496d-82c3-b866690fd579\") " pod="calico-system/calico-typha-6d5fd567fb-29pn7"
May 17 00:25:46.169119 containerd[2119]: time="2025-05-17T00:25:46.167465702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d5fd567fb-29pn7,Uid:838365c3-3ba6-496d-82c3-b866690fd579,Namespace:calico-system,Attempt:0,}"
May 17 00:25:46.239018 containerd[2119]: time="2025-05-17T00:25:46.238893986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:25:46.240526 containerd[2119]: time="2025-05-17T00:25:46.240037113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..."
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:25:46.240526 containerd[2119]: time="2025-05-17T00:25:46.240194984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:25:46.240884 containerd[2119]: time="2025-05-17T00:25:46.240507244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:25:46.321054 kubelet[3368]: I0517 00:25:46.321005 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fe2932d-6f34-4682-975c-6e3633620d41-tigera-ca-bundle\") pod \"calico-node-znq85\" (UID: \"9fe2932d-6f34-4682-975c-6e3633620d41\") " pod="calico-system/calico-node-znq85" May 17 00:25:46.321054 kubelet[3368]: I0517 00:25:46.321049 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9fe2932d-6f34-4682-975c-6e3633620d41-flexvol-driver-host\") pod \"calico-node-znq85\" (UID: \"9fe2932d-6f34-4682-975c-6e3633620d41\") " pod="calico-system/calico-node-znq85" May 17 00:25:46.321310 kubelet[3368]: I0517 00:25:46.321075 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9fe2932d-6f34-4682-975c-6e3633620d41-var-run-calico\") pod \"calico-node-znq85\" (UID: \"9fe2932d-6f34-4682-975c-6e3633620d41\") " pod="calico-system/calico-node-znq85" May 17 00:25:46.321310 kubelet[3368]: I0517 00:25:46.321098 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9fe2932d-6f34-4682-975c-6e3633620d41-var-lib-calico\") pod \"calico-node-znq85\" (UID: \"9fe2932d-6f34-4682-975c-6e3633620d41\") " pod="calico-system/calico-node-znq85" May 17 00:25:46.321310 kubelet[3368]: I0517 00:25:46.321122 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9fe2932d-6f34-4682-975c-6e3633620d41-lib-modules\") pod \"calico-node-znq85\" (UID: \"9fe2932d-6f34-4682-975c-6e3633620d41\") " pod="calico-system/calico-node-znq85" May 17 00:25:46.321310 kubelet[3368]: I0517 00:25:46.321145 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9fe2932d-6f34-4682-975c-6e3633620d41-cni-bin-dir\") pod \"calico-node-znq85\" (UID: \"9fe2932d-6f34-4682-975c-6e3633620d41\") " pod="calico-system/calico-node-znq85" May 17 00:25:46.321310 kubelet[3368]: I0517 00:25:46.321168 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9fe2932d-6f34-4682-975c-6e3633620d41-cni-log-dir\") pod \"calico-node-znq85\" (UID: \"9fe2932d-6f34-4682-975c-6e3633620d41\") " pod="calico-system/calico-node-znq85" May 17 00:25:46.321617 kubelet[3368]: I0517 00:25:46.321192 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9fe2932d-6f34-4682-975c-6e3633620d41-xtables-lock\") pod \"calico-node-znq85\" (UID: \"9fe2932d-6f34-4682-975c-6e3633620d41\") " 
pod="calico-system/calico-node-znq85" May 17 00:25:46.321617 kubelet[3368]: I0517 00:25:46.321216 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7knz\" (UniqueName: \"kubernetes.io/projected/9fe2932d-6f34-4682-975c-6e3633620d41-kube-api-access-f7knz\") pod \"calico-node-znq85\" (UID: \"9fe2932d-6f34-4682-975c-6e3633620d41\") " pod="calico-system/calico-node-znq85" May 17 00:25:46.321617 kubelet[3368]: I0517 00:25:46.321238 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9fe2932d-6f34-4682-975c-6e3633620d41-policysync\") pod \"calico-node-znq85\" (UID: \"9fe2932d-6f34-4682-975c-6e3633620d41\") " pod="calico-system/calico-node-znq85" May 17 00:25:46.321617 kubelet[3368]: I0517 00:25:46.321267 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9fe2932d-6f34-4682-975c-6e3633620d41-cni-net-dir\") pod \"calico-node-znq85\" (UID: \"9fe2932d-6f34-4682-975c-6e3633620d41\") " pod="calico-system/calico-node-znq85" May 17 00:25:46.321617 kubelet[3368]: I0517 00:25:46.321292 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9fe2932d-6f34-4682-975c-6e3633620d41-node-certs\") pod \"calico-node-znq85\" (UID: \"9fe2932d-6f34-4682-975c-6e3633620d41\") " pod="calico-system/calico-node-znq85" May 17 00:25:46.357074 containerd[2119]: time="2025-05-17T00:25:46.356887894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d5fd567fb-29pn7,Uid:838365c3-3ba6-496d-82c3-b866690fd579,Namespace:calico-system,Attempt:0,} returns sandbox id \"c0f9c896651d06158f0e32e15830492bc7344193270529a1c03333226d7ba77b\"" May 17 00:25:46.362443 containerd[2119]: time="2025-05-17T00:25:46.362199633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 17 00:25:46.407536 kubelet[3368]: E0517 00:25:46.406677 3368 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hhsvr" podUID="9548793e-04a2-4303-8663-86deb887e61f" May 17 00:25:46.441931 kubelet[3368]: E0517 00:25:46.441634 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.441931 kubelet[3368]: W0517 00:25:46.441676 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.441931 kubelet[3368]: E0517 00:25:46.441708 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:25:46.444715 kubelet[3368]: E0517 00:25:46.444609 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.444715 kubelet[3368]: W0517 00:25:46.444653 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.444715 kubelet[3368]: E0517 00:25:46.444678 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:46.495799 containerd[2119]: time="2025-05-17T00:25:46.494974695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-znq85,Uid:9fe2932d-6f34-4682-975c-6e3633620d41,Namespace:calico-system,Attempt:0,}" May 17 00:25:46.523752 kubelet[3368]: E0517 00:25:46.523720 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.523752 kubelet[3368]: W0517 00:25:46.523742 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.523752 kubelet[3368]: E0517 00:25:46.523762 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:46.524006 kubelet[3368]: I0517 00:25:46.523791 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9548793e-04a2-4303-8663-86deb887e61f-registration-dir\") pod \"csi-node-driver-hhsvr\" (UID: \"9548793e-04a2-4303-8663-86deb887e61f\") " pod="calico-system/csi-node-driver-hhsvr" May 17 00:25:46.524006 kubelet[3368]: E0517 00:25:46.524002 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.524218 kubelet[3368]: W0517 00:25:46.524012 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.524218 kubelet[3368]: E0517 00:25:46.524028 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:25:46.524218 kubelet[3368]: I0517 00:25:46.524047 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9548793e-04a2-4303-8663-86deb887e61f-socket-dir\") pod \"csi-node-driver-hhsvr\" (UID: \"9548793e-04a2-4303-8663-86deb887e61f\") " pod="calico-system/csi-node-driver-hhsvr" May 17 00:25:46.524531 kubelet[3368]: E0517 00:25:46.524497 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.524531 kubelet[3368]: W0517 00:25:46.524517 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.524531 kubelet[3368]: E0517 00:25:46.524537 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:46.524859 kubelet[3368]: E0517 00:25:46.524765 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.524859 kubelet[3368]: W0517 00:25:46.524773 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.524859 kubelet[3368]: E0517 00:25:46.524786 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:46.525298 kubelet[3368]: E0517 00:25:46.525144 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.525298 kubelet[3368]: W0517 00:25:46.525160 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.525298 kubelet[3368]: E0517 00:25:46.525186 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:46.525298 kubelet[3368]: I0517 00:25:46.525206 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9548793e-04a2-4303-8663-86deb887e61f-kubelet-dir\") pod \"csi-node-driver-hhsvr\" (UID: \"9548793e-04a2-4303-8663-86deb887e61f\") " pod="calico-system/csi-node-driver-hhsvr" May 17 00:25:46.525474 kubelet[3368]: E0517 00:25:46.525464 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.525549 kubelet[3368]: W0517 00:25:46.525514 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.525737 kubelet[3368]: E0517 00:25:46.525635 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:25:46.525737 kubelet[3368]: I0517 00:25:46.525665 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9548793e-04a2-4303-8663-86deb887e61f-varrun\") pod \"csi-node-driver-hhsvr\" (UID: \"9548793e-04a2-4303-8663-86deb887e61f\") " pod="calico-system/csi-node-driver-hhsvr" May 17 00:25:46.525737 kubelet[3368]: E0517 00:25:46.525726 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.525737 kubelet[3368]: W0517 00:25:46.525733 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.525997 kubelet[3368]: E0517 00:25:46.525896 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.525997 kubelet[3368]: W0517 00:25:46.525910 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.525997 kubelet[3368]: E0517 00:25:46.525921 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:46.525997 kubelet[3368]: E0517 00:25:46.525894 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:46.526318 kubelet[3368]: E0517 00:25:46.526060 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.526318 kubelet[3368]: W0517 00:25:46.526068 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.526318 kubelet[3368]: E0517 00:25:46.526075 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:46.527998 kubelet[3368]: E0517 00:25:46.527852 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.527998 kubelet[3368]: W0517 00:25:46.527867 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.527998 kubelet[3368]: E0517 00:25:46.527970 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:25:46.528407 kubelet[3368]: E0517 00:25:46.528174 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.528407 kubelet[3368]: W0517 00:25:46.528191 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.528407 kubelet[3368]: E0517 00:25:46.528207 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:46.528407 kubelet[3368]: I0517 00:25:46.528227 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqw59\" (UniqueName: \"kubernetes.io/projected/9548793e-04a2-4303-8663-86deb887e61f-kube-api-access-hqw59\") pod \"csi-node-driver-hhsvr\" (UID: \"9548793e-04a2-4303-8663-86deb887e61f\") " pod="calico-system/csi-node-driver-hhsvr" May 17 00:25:46.528672 kubelet[3368]: E0517 00:25:46.528422 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.528672 kubelet[3368]: W0517 00:25:46.528430 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.528672 kubelet[3368]: E0517 00:25:46.528451 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:46.528672 kubelet[3368]: E0517 00:25:46.528624 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.528672 kubelet[3368]: W0517 00:25:46.528631 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.528672 kubelet[3368]: E0517 00:25:46.528644 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:46.529685 kubelet[3368]: E0517 00:25:46.528785 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.529685 kubelet[3368]: W0517 00:25:46.528791 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.529685 kubelet[3368]: E0517 00:25:46.528798 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:25:46.529685 kubelet[3368]: E0517 00:25:46.528951 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.529685 kubelet[3368]: W0517 00:25:46.528958 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.529685 kubelet[3368]: E0517 00:25:46.528965 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:46.545966 containerd[2119]: time="2025-05-17T00:25:46.545879677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:25:46.545966 containerd[2119]: time="2025-05-17T00:25:46.545947430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:25:46.546752 containerd[2119]: time="2025-05-17T00:25:46.546707179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:25:46.547023 containerd[2119]: time="2025-05-17T00:25:46.546909397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:25:46.609253 containerd[2119]: time="2025-05-17T00:25:46.609198769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-znq85,Uid:9fe2932d-6f34-4682-975c-6e3633620d41,Namespace:calico-system,Attempt:0,} returns sandbox id \"652cbebf4980dddf48605697cdd1fe13b962a599e9f333b65ee1d717a583ddac\"" May 17 00:25:46.629613 kubelet[3368]: E0517 00:25:46.629528 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.629613 kubelet[3368]: W0517 00:25:46.629552 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.630053 kubelet[3368]: E0517 00:25:46.629576 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:46.630374 kubelet[3368]: E0517 00:25:46.630272 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.630374 kubelet[3368]: W0517 00:25:46.630302 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.630374 kubelet[3368]: E0517 00:25:46.630324 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:25:46.631602 kubelet[3368]: E0517 00:25:46.631476 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.631602 kubelet[3368]: W0517 00:25:46.631494 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.631602 kubelet[3368]: E0517 00:25:46.631510 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:46.632113 kubelet[3368]: E0517 00:25:46.631981 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.632113 kubelet[3368]: W0517 00:25:46.631995 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.632113 kubelet[3368]: E0517 00:25:46.632027 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:46.633615 kubelet[3368]: E0517 00:25:46.633554 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.633615 kubelet[3368]: W0517 00:25:46.633572 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.633725 kubelet[3368]: E0517 00:25:46.633619 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:46.634810 kubelet[3368]: E0517 00:25:46.633872 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.634810 kubelet[3368]: W0517 00:25:46.633900 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.634810 kubelet[3368]: E0517 00:25:46.633914 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:46.634810 kubelet[3368]: E0517 00:25:46.634130 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.634810 kubelet[3368]: W0517 00:25:46.634142 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.634810 kubelet[3368]: E0517 00:25:46.634155 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:25:46.634810 kubelet[3368]: E0517 00:25:46.634424 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.634810 kubelet[3368]: W0517 00:25:46.634437 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.634810 kubelet[3368]: E0517 00:25:46.634449 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:46.634810 kubelet[3368]: E0517 00:25:46.634715 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.637641 kubelet[3368]: W0517 00:25:46.634725 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.637641 kubelet[3368]: E0517 00:25:46.634738 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:46.637641 kubelet[3368]: E0517 00:25:46.635001 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.637641 kubelet[3368]: W0517 00:25:46.635012 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.637641 kubelet[3368]: E0517 00:25:46.635095 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:46.637641 kubelet[3368]: E0517 00:25:46.635243 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.637641 kubelet[3368]: W0517 00:25:46.635252 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.637641 kubelet[3368]: E0517 00:25:46.635351 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:46.637641 kubelet[3368]: E0517 00:25:46.635483 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.637641 kubelet[3368]: W0517 00:25:46.635492 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.638112 kubelet[3368]: E0517 00:25:46.635593 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:25:46.638112 kubelet[3368]: E0517 00:25:46.635737 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.638112 kubelet[3368]: W0517 00:25:46.635746 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.638112 kubelet[3368]: E0517 00:25:46.635942 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.638112 kubelet[3368]: W0517 00:25:46.635951 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.638112 kubelet[3368]: E0517 00:25:46.635964 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:46.638112 kubelet[3368]: E0517 00:25:46.636144 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.638112 kubelet[3368]: W0517 00:25:46.636157 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.638112 kubelet[3368]: E0517 00:25:46.636169 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:46.638112 kubelet[3368]: E0517 00:25:46.636396 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.638500 kubelet[3368]: W0517 00:25:46.636406 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.638500 kubelet[3368]: E0517 00:25:46.636412 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:46.638500 kubelet[3368]: E0517 00:25:46.636417 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:46.638500 kubelet[3368]: E0517 00:25:46.636622 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.638500 kubelet[3368]: W0517 00:25:46.636632 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.638500 kubelet[3368]: E0517 00:25:46.636654 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:25:46.638500 kubelet[3368]: E0517 00:25:46.636872 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.638500 kubelet[3368]: W0517 00:25:46.636883 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.638500 kubelet[3368]: E0517 00:25:46.636965 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:46.638500 kubelet[3368]: E0517 00:25:46.637161 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.643258 kubelet[3368]: W0517 00:25:46.637174 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.643258 kubelet[3368]: E0517 00:25:46.637368 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.643258 kubelet[3368]: W0517 00:25:46.637379 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.643258 kubelet[3368]: E0517 00:25:46.637392 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:46.643258 kubelet[3368]: E0517 00:25:46.637602 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.643258 kubelet[3368]: W0517 00:25:46.637614 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.643258 kubelet[3368]: E0517 00:25:46.637626 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:46.643258 kubelet[3368]: E0517 00:25:46.637869 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:46.643258 kubelet[3368]: E0517 00:25:46.638019 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.643258 kubelet[3368]: W0517 00:25:46.638029 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.643686 kubelet[3368]: E0517 00:25:46.638045 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:25:46.643686 kubelet[3368]: E0517 00:25:46.638480 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.643686 kubelet[3368]: W0517 00:25:46.638490 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.643686 kubelet[3368]: E0517 00:25:46.638504 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:46.643686 kubelet[3368]: E0517 00:25:46.638769 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.643686 kubelet[3368]: W0517 00:25:46.638780 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.643686 kubelet[3368]: E0517 00:25:46.638794 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:46.643686 kubelet[3368]: E0517 00:25:46.639094 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.643686 kubelet[3368]: W0517 00:25:46.639104 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.643686 kubelet[3368]: E0517 00:25:46.639117 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:46.655860 kubelet[3368]: E0517 00:25:46.655775 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:46.655860 kubelet[3368]: W0517 00:25:46.655796 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:46.655860 kubelet[3368]: E0517 00:25:46.655818 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:47.867209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2882429494.mount: Deactivated successfully. 
May 17 00:25:48.178932 kubelet[3368]: E0517 00:25:48.178664 3368 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hhsvr" podUID="9548793e-04a2-4303-8663-86deb887e61f" May 17 00:25:48.897469 containerd[2119]: time="2025-05-17T00:25:48.897415983Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:48.898661 containerd[2119]: time="2025-05-17T00:25:48.898508855Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=35158669" May 17 00:25:48.921134 containerd[2119]: time="2025-05-17T00:25:48.920987281Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:48.942863 containerd[2119]: time="2025-05-17T00:25:48.942809288Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:48.944025 containerd[2119]: time="2025-05-17T00:25:48.943985250Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 2.581740919s" May 17 00:25:48.944142 containerd[2119]: time="2025-05-17T00:25:48.944032961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\"" May 17 00:25:48.948304 containerd[2119]: time="2025-05-17T00:25:48.948270064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 17 00:25:48.976928 containerd[2119]: time="2025-05-17T00:25:48.976883930Z" level=info msg="CreateContainer within sandbox \"c0f9c896651d06158f0e32e15830492bc7344193270529a1c03333226d7ba77b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 17 00:25:49.035507 containerd[2119]: time="2025-05-17T00:25:49.033864803Z" level=info msg="CreateContainer within sandbox \"c0f9c896651d06158f0e32e15830492bc7344193270529a1c03333226d7ba77b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ea90257450c08bdeac1170ee4cb13080cdf12e032f723769b7eeef4e63275a05\"" May 17 00:25:49.036704 containerd[2119]: time="2025-05-17T00:25:49.036247459Z" level=info msg="StartContainer for \"ea90257450c08bdeac1170ee4cb13080cdf12e032f723769b7eeef4e63275a05\"" May 17 00:25:49.234279 containerd[2119]: time="2025-05-17T00:25:49.233332523Z" level=info msg="StartContainer for \"ea90257450c08bdeac1170ee4cb13080cdf12e032f723769b7eeef4e63275a05\" returns successfully" May 17 00:25:49.455008 kubelet[3368]: E0517 00:25:49.454960 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:49.455008 kubelet[3368]: W0517 00:25:49.455004 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" May 17 00:25:49.455628 kubelet[3368]: E0517 00:25:49.455032 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:49.455844 kubelet[3368]: E0517 00:25:49.455825 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:49.455914 kubelet[3368]: W0517 00:25:49.455845 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:49.455914 kubelet[3368]: E0517 00:25:49.455880 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:49.456906 kubelet[3368]: E0517 00:25:49.456886 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:49.456906 kubelet[3368]: W0517 00:25:49.456904 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:49.457079 kubelet[3368]: E0517 00:25:49.456920 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:49.458794 kubelet[3368]: E0517 00:25:49.457772 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:49.458794 kubelet[3368]: W0517 00:25:49.457802 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:49.458794 kubelet[3368]: E0517 00:25:49.457818 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:49.459168 kubelet[3368]: E0517 00:25:49.459152 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:49.459257 kubelet[3368]: W0517 00:25:49.459168 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:49.459257 kubelet[3368]: E0517 00:25:49.459183 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:25:49.460913 kubelet[3368]: E0517 00:25:49.460859 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:49.460913 kubelet[3368]: W0517 00:25:49.460911 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:49.461075 kubelet[3368]: E0517 00:25:49.460932 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:49.461909 kubelet[3368]: E0517 00:25:49.461888 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:49.461909 kubelet[3368]: W0517 00:25:49.461908 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:49.462046 kubelet[3368]: E0517 00:25:49.461939 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:49.467610 kubelet[3368]: E0517 00:25:49.465733 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:49.467610 kubelet[3368]: W0517 00:25:49.465756 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:49.467610 kubelet[3368]: E0517 00:25:49.465778 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:49.468728 kubelet[3368]: E0517 00:25:49.468701 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:49.468728 kubelet[3368]: W0517 00:25:49.468727 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:49.468904 kubelet[3368]: E0517 00:25:49.468752 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:49.469051 kubelet[3368]: E0517 00:25:49.469036 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:49.469115 kubelet[3368]: W0517 00:25:49.469051 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:49.469115 kubelet[3368]: E0517 00:25:49.469068 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:25:49.469312 kubelet[3368]: E0517 00:25:49.469288 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:49.469366 kubelet[3368]: W0517 00:25:49.469319 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:49.469366 kubelet[3368]: E0517 00:25:49.469332 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:49.469773 kubelet[3368]: E0517 00:25:49.469755 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:49.469773 kubelet[3368]: W0517 00:25:49.469772 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:49.469895 kubelet[3368]: E0517 00:25:49.469786 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:49.470087 kubelet[3368]: E0517 00:25:49.470072 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:49.470149 kubelet[3368]: W0517 00:25:49.470088 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:49.470149 kubelet[3368]: E0517 00:25:49.470102 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:49.470602 kubelet[3368]: E0517 00:25:49.470310 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:49.470602 kubelet[3368]: W0517 00:25:49.470321 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:49.470602 kubelet[3368]: E0517 00:25:49.470332 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:49.470766 kubelet[3368]: E0517 00:25:49.470607 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:49.470766 kubelet[3368]: W0517 00:25:49.470618 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:49.470766 kubelet[3368]: E0517 00:25:49.470630 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:25:49.476617 kubelet[3368]: E0517 00:25:49.474124 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:49.476617 kubelet[3368]: W0517 00:25:49.474146 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:49.476617 kubelet[3368]: E0517 00:25:49.474166 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:49.476617 kubelet[3368]: E0517 00:25:49.475856 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:49.476617 kubelet[3368]: W0517 00:25:49.475871 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:49.476617 kubelet[3368]: E0517 00:25:49.475892 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:49.476617 kubelet[3368]: E0517 00:25:49.476297 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:49.476617 kubelet[3368]: W0517 00:25:49.476309 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:49.479763 kubelet[3368]: E0517 00:25:49.479734 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:49.484983 kubelet[3368]: E0517 00:25:49.484395 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:49.484983 kubelet[3368]: W0517 00:25:49.484419 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:49.484983 kubelet[3368]: E0517 00:25:49.484444 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:49.484983 kubelet[3368]: E0517 00:25:49.484844 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:49.484983 kubelet[3368]: W0517 00:25:49.484858 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:49.484983 kubelet[3368]: E0517 00:25:49.484874 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:25:49.495746 kubelet[3368]: E0517 00:25:49.495709 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:49.495746 kubelet[3368]: W0517 00:25:49.495743 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:49.496196 kubelet[3368]: E0517 00:25:49.496173 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:49.497982 kubelet[3368]: E0517 00:25:49.497836 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:49.497982 kubelet[3368]: W0517 00:25:49.497857 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:49.500611 kubelet[3368]: E0517 00:25:49.500407 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:49.502746 kubelet[3368]: E0517 00:25:49.501940 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:49.502746 kubelet[3368]: W0517 00:25:49.501961 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:49.503062 kubelet[3368]: E0517 00:25:49.502942 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:49.504715 kubelet[3368]: E0517 00:25:49.504674 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:49.504715 kubelet[3368]: W0517 00:25:49.504692 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:49.505438 kubelet[3368]: E0517 00:25:49.504908 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:49.506374 kubelet[3368]: E0517 00:25:49.506358 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:49.506722 kubelet[3368]: W0517 00:25:49.506705 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:49.507294 kubelet[3368]: E0517 00:25:49.507113 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:25:49.508688 kubelet[3368]: E0517 00:25:49.508604 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:49.508688 kubelet[3368]: W0517 00:25:49.508620 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:49.509804 kubelet[3368]: E0517 00:25:49.508825 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:49.514435 kubelet[3368]: E0517 00:25:49.514408 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:49.515091 kubelet[3368]: W0517 00:25:49.514607 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:49.516261 kubelet[3368]: E0517 00:25:49.515264 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:49.516567 kubelet[3368]: E0517 00:25:49.516554 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:49.517726 kubelet[3368]: W0517 00:25:49.517630 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:49.517726 kubelet[3368]: E0517 00:25:49.517691 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:49.519763 kubelet[3368]: E0517 00:25:49.519659 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:49.519763 kubelet[3368]: W0517 00:25:49.519677 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:49.519763 kubelet[3368]: E0517 00:25:49.519715 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:49.521603 kubelet[3368]: E0517 00:25:49.520215 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:49.521603 kubelet[3368]: W0517 00:25:49.520229 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:49.521603 kubelet[3368]: E0517 00:25:49.520313 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:25:49.523702 kubelet[3368]: E0517 00:25:49.523685 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:49.523798 kubelet[3368]: W0517 00:25:49.523785 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:49.523904 kubelet[3368]: E0517 00:25:49.523890 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:49.524425 kubelet[3368]: E0517 00:25:49.524413 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:49.524514 kubelet[3368]: W0517 00:25:49.524504 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:49.526337 kubelet[3368]: E0517 00:25:49.526320 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:49.526768 kubelet[3368]: E0517 00:25:49.526706 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:49.526768 kubelet[3368]: W0517 00:25:49.526721 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:49.526768 kubelet[3368]: E0517 00:25:49.526736 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:25:50.180605 kubelet[3368]: E0517 00:25:50.179228 3368 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hhsvr" podUID="9548793e-04a2-4303-8663-86deb887e61f" May 17 00:25:50.287077 containerd[2119]: time="2025-05-17T00:25:50.287026528Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:50.288080 containerd[2119]: time="2025-05-17T00:25:50.287930093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4441619" May 17 00:25:50.289620 containerd[2119]: time="2025-05-17T00:25:50.289032758Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:50.291711 containerd[2119]: time="2025-05-17T00:25:50.291659891Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:50.292289 containerd[2119]: time="2025-05-17T00:25:50.292151676Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 1.342755521s" May 17 00:25:50.292289 containerd[2119]: time="2025-05-17T00:25:50.292180913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 17 00:25:50.294502 containerd[2119]: time="2025-05-17T00:25:50.294380484Z" level=info msg="CreateContainer within sandbox \"652cbebf4980dddf48605697cdd1fe13b962a599e9f333b65ee1d717a583ddac\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 17 00:25:50.343791 containerd[2119]: time="2025-05-17T00:25:50.343747328Z" level=info msg="CreateContainer within sandbox \"652cbebf4980dddf48605697cdd1fe13b962a599e9f333b65ee1d717a583ddac\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c1e6e8658627bbd94593f56c02dcb690d830acd6bc4ae4e7b9d6c6fc9f01ff47\"" May 17 00:25:50.344700 containerd[2119]: time="2025-05-17T00:25:50.344664536Z" level=info msg="StartContainer for \"c1e6e8658627bbd94593f56c02dcb690d830acd6bc4ae4e7b9d6c6fc9f01ff47\"" May 17 00:25:50.377222 kubelet[3368]: I0517 00:25:50.377186 3368 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:25:50.380703 kubelet[3368]: E0517 00:25:50.380650 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:50.380703 kubelet[3368]: W0517 00:25:50.380703 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:50.380895 
kubelet[3368]: E0517 00:25:50.380729 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:50.385682 kubelet[3368]: E0517 00:25:50.385392 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:50.385682 kubelet[3368]: W0517 00:25:50.385418 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:50.385682 kubelet[3368]: E0517 00:25:50.385443 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:50.385932 kubelet[3368]: E0517 00:25:50.385897 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:50.385932 kubelet[3368]: W0517 00:25:50.385913 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:50.386022 kubelet[3368]: E0517 00:25:50.385932 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:50.386813 kubelet[3368]: E0517 00:25:50.386363 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:50.386813 kubelet[3368]: W0517 00:25:50.386377 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:50.386813 kubelet[3368]: E0517 00:25:50.386392 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:50.387195 kubelet[3368]: E0517 00:25:50.387140 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:50.387195 kubelet[3368]: W0517 00:25:50.387158 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:50.387195 kubelet[3368]: E0517 00:25:50.387172 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:25:50.387847 kubelet[3368]: E0517 00:25:50.387619 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:50.387847 kubelet[3368]: W0517 00:25:50.387633 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:50.387847 kubelet[3368]: E0517 00:25:50.387647 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:50.388037 kubelet[3368]: E0517 00:25:50.387870 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:50.388037 kubelet[3368]: W0517 00:25:50.387880 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:50.388037 kubelet[3368]: E0517 00:25:50.387893 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:50.388411 kubelet[3368]: E0517 00:25:50.388102 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:50.388411 kubelet[3368]: W0517 00:25:50.388112 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:50.388411 kubelet[3368]: E0517 00:25:50.388123 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:50.388411 kubelet[3368]: E0517 00:25:50.388393 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:50.388411 kubelet[3368]: W0517 00:25:50.388406 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:50.388960 kubelet[3368]: E0517 00:25:50.388419 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:50.388960 kubelet[3368]: E0517 00:25:50.388635 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:50.388960 kubelet[3368]: W0517 00:25:50.388644 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:50.388960 kubelet[3368]: E0517 00:25:50.388662 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:25:50.388960 kubelet[3368]: E0517 00:25:50.388925 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:50.388960 kubelet[3368]: W0517 00:25:50.388937 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:50.388960 kubelet[3368]: E0517 00:25:50.388950 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:50.389599 kubelet[3368]: E0517 00:25:50.389180 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:50.389599 kubelet[3368]: W0517 00:25:50.389192 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:50.389599 kubelet[3368]: E0517 00:25:50.389206 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:50.389599 kubelet[3368]: E0517 00:25:50.389513 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:50.389599 kubelet[3368]: W0517 00:25:50.389524 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:50.389599 kubelet[3368]: E0517 00:25:50.389537 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:50.390037 kubelet[3368]: E0517 00:25:50.389888 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:50.390037 kubelet[3368]: W0517 00:25:50.389900 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:50.390037 kubelet[3368]: E0517 00:25:50.389913 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:50.390562 kubelet[3368]: E0517 00:25:50.390222 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:50.390562 kubelet[3368]: W0517 00:25:50.390234 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:50.390562 kubelet[3368]: E0517 00:25:50.390249 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:25:50.391050 kubelet[3368]: E0517 00:25:50.390774 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:50.391050 kubelet[3368]: W0517 00:25:50.390786 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:50.391050 kubelet[3368]: E0517 00:25:50.390807 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:50.391423 kubelet[3368]: E0517 00:25:50.391409 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:50.391597 kubelet[3368]: W0517 00:25:50.391505 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:50.391597 kubelet[3368]: E0517 00:25:50.391542 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:50.394159 kubelet[3368]: E0517 00:25:50.392774 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:50.394159 kubelet[3368]: W0517 00:25:50.392789 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:50.394159 kubelet[3368]: E0517 00:25:50.392808 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:50.394159 kubelet[3368]: E0517 00:25:50.393064 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:50.394159 kubelet[3368]: W0517 00:25:50.393075 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:50.394159 kubelet[3368]: E0517 00:25:50.393325 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:50.394650 kubelet[3368]: E0517 00:25:50.394630 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:50.394650 kubelet[3368]: W0517 00:25:50.394649 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:50.394766 kubelet[3368]: E0517 00:25:50.394710 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:25:50.395661 kubelet[3368]: E0517 00:25:50.395632 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:50.395661 kubelet[3368]: W0517 00:25:50.395650 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:50.395999 kubelet[3368]: E0517 00:25:50.395745 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:50.396684 kubelet[3368]: E0517 00:25:50.396667 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:50.396684 kubelet[3368]: W0517 00:25:50.396682 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:50.396851 kubelet[3368]: E0517 00:25:50.396802 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:50.397331 kubelet[3368]: E0517 00:25:50.397295 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:50.397400 kubelet[3368]: W0517 00:25:50.397332 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:50.399098 kubelet[3368]: E0517 00:25:50.397730 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:50.399098 kubelet[3368]: E0517 00:25:50.398740 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:50.399098 kubelet[3368]: W0517 00:25:50.398751 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:50.399098 kubelet[3368]: E0517 00:25:50.398855 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:50.401355 kubelet[3368]: E0517 00:25:50.401332 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:50.401355 kubelet[3368]: W0517 00:25:50.401354 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:50.401483 kubelet[3368]: E0517 00:25:50.401446 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:25:50.402201 kubelet[3368]: E0517 00:25:50.402086 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:50.402201 kubelet[3368]: W0517 00:25:50.402130 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:50.402447 kubelet[3368]: E0517 00:25:50.402294 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:50.402447 kubelet[3368]: E0517 00:25:50.402399 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:50.402447 kubelet[3368]: W0517 00:25:50.402409 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:50.402731 kubelet[3368]: E0517 00:25:50.402491 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:50.405860 kubelet[3368]: E0517 00:25:50.405691 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:50.405860 kubelet[3368]: W0517 00:25:50.405715 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:50.405860 kubelet[3368]: E0517 00:25:50.405737 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:50.407911 systemd[1]: run-containerd-runc-k8s.io-c1e6e8658627bbd94593f56c02dcb690d830acd6bc4ae4e7b9d6c6fc9f01ff47-runc.oYw2m7.mount: Deactivated successfully. May 17 00:25:50.408376 kubelet[3368]: E0517 00:25:50.408115 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:50.408376 kubelet[3368]: W0517 00:25:50.408132 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:50.408376 kubelet[3368]: E0517 00:25:50.408156 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:25:50.411046 kubelet[3368]: E0517 00:25:50.409838 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:50.411046 kubelet[3368]: W0517 00:25:50.409856 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:50.411046 kubelet[3368]: E0517 00:25:50.409879 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:50.411046 kubelet[3368]: E0517 00:25:50.410667 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:50.411046 kubelet[3368]: W0517 00:25:50.410682 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:50.411046 kubelet[3368]: E0517 00:25:50.410704 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:50.411536 kubelet[3368]: E0517 00:25:50.411519 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:50.411536 kubelet[3368]: W0517 00:25:50.411536 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:50.411717 kubelet[3368]: E0517 00:25:50.411552 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:25:50.417316 kubelet[3368]: E0517 00:25:50.416678 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:25:50.417316 kubelet[3368]: W0517 00:25:50.416702 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:25:50.417316 kubelet[3368]: E0517 00:25:50.416725 3368 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:25:50.448794 containerd[2119]: time="2025-05-17T00:25:50.448549762Z" level=info msg="StartContainer for \"c1e6e8658627bbd94593f56c02dcb690d830acd6bc4ae4e7b9d6c6fc9f01ff47\" returns successfully" May 17 00:25:50.503644 containerd[2119]: time="2025-05-17T00:25:50.501395753Z" level=info msg="shim disconnected" id=c1e6e8658627bbd94593f56c02dcb690d830acd6bc4ae4e7b9d6c6fc9f01ff47 namespace=k8s.io May 17 00:25:50.503644 containerd[2119]: time="2025-05-17T00:25:50.503098050Z" level=warning msg="cleaning up after shim disconnected" id=c1e6e8658627bbd94593f56c02dcb690d830acd6bc4ae4e7b9d6c6fc9f01ff47 namespace=k8s.io May 17 00:25:50.503644 containerd[2119]: time="2025-05-17T00:25:50.503126037Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:25:50.957831 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1e6e8658627bbd94593f56c02dcb690d830acd6bc4ae4e7b9d6c6fc9f01ff47-rootfs.mount: Deactivated successfully. May 17 00:25:51.382614 containerd[2119]: time="2025-05-17T00:25:51.381324571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 17 00:25:51.397813 kubelet[3368]: I0517 00:25:51.397436 3368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6d5fd567fb-29pn7" podStartSLOduration=3.81194611 podStartE2EDuration="6.397417637s" podCreationTimestamp="2025-05-17 00:25:45 +0000 UTC" firstStartedPulling="2025-05-17 00:25:46.361012809 +0000 UTC m=+21.342739537" lastFinishedPulling="2025-05-17 00:25:48.946484342 +0000 UTC m=+23.928211064" observedRunningTime="2025-05-17 00:25:49.422306143 +0000 UTC m=+24.404032879" watchObservedRunningTime="2025-05-17 00:25:51.397417637 +0000 UTC m=+26.379144354" May 17 00:25:52.179390 kubelet[3368]: E0517 00:25:52.179247 3368 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hhsvr" podUID="9548793e-04a2-4303-8663-86deb887e61f" May 17 00:25:54.179596 kubelet[3368]: E0517 00:25:54.179529 3368 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hhsvr" podUID="9548793e-04a2-4303-8663-86deb887e61f" May 17 00:25:56.179817 kubelet[3368]: E0517 00:25:56.179764 3368 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hhsvr" podUID="9548793e-04a2-4303-8663-86deb887e61f" May 17 00:25:58.179074 kubelet[3368]: E0517 00:25:58.179022 3368 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hhsvr" podUID="9548793e-04a2-4303-8663-86deb887e61f" May 17 00:25:58.584178 containerd[2119]: time="2025-05-17T00:25:58.584115058Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:58.586682 containerd[2119]: 
time="2025-05-17T00:25:58.586567393Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 17 00:25:58.590493 containerd[2119]: time="2025-05-17T00:25:58.590301444Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:58.596040 containerd[2119]: time="2025-05-17T00:25:58.596000137Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:58.597135 containerd[2119]: time="2025-05-17T00:25:58.596522567Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 7.21511316s" May 17 00:25:58.597135 containerd[2119]: time="2025-05-17T00:25:58.596554706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 17 00:25:58.600284 containerd[2119]: time="2025-05-17T00:25:58.600251911Z" level=info msg="CreateContainer within sandbox \"652cbebf4980dddf48605697cdd1fe13b962a599e9f333b65ee1d717a583ddac\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 17 00:25:58.641902 containerd[2119]: time="2025-05-17T00:25:58.641846390Z" level=info msg="CreateContainer within sandbox \"652cbebf4980dddf48605697cdd1fe13b962a599e9f333b65ee1d717a583ddac\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5479ba3add0b51e8962ab17f77a06bb69c8ebd52cad039bbc5ca2fb6c806eea0\"" May 17 00:25:58.642752 containerd[2119]: time="2025-05-17T00:25:58.642677420Z" level=info msg="StartContainer for \"5479ba3add0b51e8962ab17f77a06bb69c8ebd52cad039bbc5ca2fb6c806eea0\"" May 17 00:25:58.717316 containerd[2119]: time="2025-05-17T00:25:58.716863400Z" level=info msg="StartContainer for \"5479ba3add0b51e8962ab17f77a06bb69c8ebd52cad039bbc5ca2fb6c806eea0\" returns successfully" May 17 00:25:59.794319 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5479ba3add0b51e8962ab17f77a06bb69c8ebd52cad039bbc5ca2fb6c806eea0-rootfs.mount: Deactivated successfully. 
May 17 00:25:59.798718 containerd[2119]: time="2025-05-17T00:25:59.796067551Z" level=info msg="shim disconnected" id=5479ba3add0b51e8962ab17f77a06bb69c8ebd52cad039bbc5ca2fb6c806eea0 namespace=k8s.io
May 17 00:25:59.798718 containerd[2119]: time="2025-05-17T00:25:59.796124260Z" level=warning msg="cleaning up after shim disconnected" id=5479ba3add0b51e8962ab17f77a06bb69c8ebd52cad039bbc5ca2fb6c806eea0 namespace=k8s.io
May 17 00:25:59.798718 containerd[2119]: time="2025-05-17T00:25:59.796133135Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:25:59.823517 kubelet[3368]: I0517 00:25:59.823491 3368 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
May 17 00:26:00.061304 kubelet[3368]: I0517 00:26:00.061180 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4fe4f39-7918-4903-9c97-2e02a23b49cc-config-volume\") pod \"coredns-7c65d6cfc9-m5q6w\" (UID: \"c4fe4f39-7918-4903-9c97-2e02a23b49cc\") " pod="kube-system/coredns-7c65d6cfc9-m5q6w"
May 17 00:26:00.069893 kubelet[3368]: I0517 00:26:00.061953 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c7e5c708-6f1e-4a6a-8224-1c84baaaea1e-calico-apiserver-certs\") pod \"calico-apiserver-58fb97568c-2dtmf\" (UID: \"c7e5c708-6f1e-4a6a-8224-1c84baaaea1e\") " pod="calico-apiserver/calico-apiserver-58fb97568c-2dtmf"
May 17 00:26:00.069893 kubelet[3368]: I0517 00:26:00.062125 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8bnn\" (UniqueName: \"kubernetes.io/projected/739e36e0-8a50-4381-84dc-d3473d61c58e-kube-api-access-c8bnn\") pod \"calico-kube-controllers-db6d855c8-9lxqb\" (UID: \"739e36e0-8a50-4381-84dc-d3473d61c58e\") " pod="calico-system/calico-kube-controllers-db6d855c8-9lxqb"
May 17 00:26:00.069893 kubelet[3368]: I0517 00:26:00.062165 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xb8w\" (UniqueName: \"kubernetes.io/projected/c4fe4f39-7918-4903-9c97-2e02a23b49cc-kube-api-access-9xb8w\") pod \"coredns-7c65d6cfc9-m5q6w\" (UID: \"c4fe4f39-7918-4903-9c97-2e02a23b49cc\") " pod="kube-system/coredns-7c65d6cfc9-m5q6w"
May 17 00:26:00.069893 kubelet[3368]: I0517 00:26:00.062193 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/689d1667-b089-4fd0-8ef7-000242998aaf-config-volume\") pod \"coredns-7c65d6cfc9-ljd7m\" (UID: \"689d1667-b089-4fd0-8ef7-000242998aaf\") " pod="kube-system/coredns-7c65d6cfc9-ljd7m"
May 17 00:26:00.069893 kubelet[3368]: I0517 00:26:00.062221 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9ntn\" (UniqueName: \"kubernetes.io/projected/f87fae28-48af-42f8-92bd-1ecd569fff56-kube-api-access-d9ntn\") pod \"goldmane-8f77d7b6c-fm6b4\" (UID: \"f87fae28-48af-42f8-92bd-1ecd569fff56\") " pod="calico-system/goldmane-8f77d7b6c-fm6b4"
May 17 00:26:00.070239 kubelet[3368]: I0517 00:26:00.062245 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-884hd\" (UniqueName: \"kubernetes.io/projected/d1e790dd-3d78-4acb-9596-f004784bb2fd-kube-api-access-884hd\") pod \"whisker-7c9fd756b4-hwshk\" (UID: \"d1e790dd-3d78-4acb-9596-f004784bb2fd\") " pod="calico-system/whisker-7c9fd756b4-hwshk"
May 17 00:26:00.070239 kubelet[3368]: I0517 00:26:00.062273 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jlwc\" (UniqueName: \"kubernetes.io/projected/e097cbd3-9914-4403-a492-af7b73e56564-kube-api-access-5jlwc\") pod \"calico-apiserver-58fb97568c-9q2hm\" (UID: \"e097cbd3-9914-4403-a492-af7b73e56564\") " pod="calico-apiserver/calico-apiserver-58fb97568c-9q2hm"
May 17 00:26:00.070239 kubelet[3368]: I0517 00:26:00.062299 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f87fae28-48af-42f8-92bd-1ecd569fff56-goldmane-key-pair\") pod \"goldmane-8f77d7b6c-fm6b4\" (UID: \"f87fae28-48af-42f8-92bd-1ecd569fff56\") " pod="calico-system/goldmane-8f77d7b6c-fm6b4"
May 17 00:26:00.070239 kubelet[3368]: I0517 00:26:00.062326 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkq8f\" (UniqueName: \"kubernetes.io/projected/689d1667-b089-4fd0-8ef7-000242998aaf-kube-api-access-xkq8f\") pod \"coredns-7c65d6cfc9-ljd7m\" (UID: \"689d1667-b089-4fd0-8ef7-000242998aaf\") " pod="kube-system/coredns-7c65d6cfc9-ljd7m"
May 17 00:26:00.070239 kubelet[3368]: I0517 00:26:00.062356 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e097cbd3-9914-4403-a492-af7b73e56564-calico-apiserver-certs\") pod \"calico-apiserver-58fb97568c-9q2hm\" (UID: \"e097cbd3-9914-4403-a492-af7b73e56564\") " pod="calico-apiserver/calico-apiserver-58fb97568c-9q2hm"
May 17 00:26:00.070470 kubelet[3368]: I0517 00:26:00.062382 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d1e790dd-3d78-4acb-9596-f004784bb2fd-whisker-backend-key-pair\") pod \"whisker-7c9fd756b4-hwshk\" (UID: \"d1e790dd-3d78-4acb-9596-f004784bb2fd\") " pod="calico-system/whisker-7c9fd756b4-hwshk"
May 17 00:26:00.071086 kubelet[3368]: I0517 00:26:00.070727 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f87fae28-48af-42f8-92bd-1ecd569fff56-goldmane-ca-bundle\") pod \"goldmane-8f77d7b6c-fm6b4\" (UID: \"f87fae28-48af-42f8-92bd-1ecd569fff56\") " pod="calico-system/goldmane-8f77d7b6c-fm6b4"
May 17 00:26:00.071086 kubelet[3368]: I0517 00:26:00.070851 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4stfv\" (UniqueName: \"kubernetes.io/projected/c7e5c708-6f1e-4a6a-8224-1c84baaaea1e-kube-api-access-4stfv\") pod \"calico-apiserver-58fb97568c-2dtmf\" (UID: \"c7e5c708-6f1e-4a6a-8224-1c84baaaea1e\") " pod="calico-apiserver/calico-apiserver-58fb97568c-2dtmf"
May 17 00:26:00.071086 kubelet[3368]: I0517 00:26:00.070881 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f87fae28-48af-42f8-92bd-1ecd569fff56-config\") pod \"goldmane-8f77d7b6c-fm6b4\" (UID: \"f87fae28-48af-42f8-92bd-1ecd569fff56\") " pod="calico-system/goldmane-8f77d7b6c-fm6b4"
May 17 00:26:00.071086 kubelet[3368]: I0517 00:26:00.070905 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/739e36e0-8a50-4381-84dc-d3473d61c58e-tigera-ca-bundle\") pod \"calico-kube-controllers-db6d855c8-9lxqb\" (UID: \"739e36e0-8a50-4381-84dc-d3473d61c58e\") " pod="calico-system/calico-kube-controllers-db6d855c8-9lxqb"
May 17 00:26:00.071086 kubelet[3368]: I0517 00:26:00.070927 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1e790dd-3d78-4acb-9596-f004784bb2fd-whisker-ca-bundle\") pod \"whisker-7c9fd756b4-hwshk\" (UID: \"d1e790dd-3d78-4acb-9596-f004784bb2fd\") " pod="calico-system/whisker-7c9fd756b4-hwshk"
May 17 00:26:00.236657 containerd[2119]: time="2025-05-17T00:26:00.235958326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hhsvr,Uid:9548793e-04a2-4303-8663-86deb887e61f,Namespace:calico-system,Attempt:0,}"
May 17 00:26:00.425370 containerd[2119]: time="2025-05-17T00:26:00.425028438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\""
May 17 00:26:00.507799 containerd[2119]: time="2025-05-17T00:26:00.507755532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-m5q6w,Uid:c4fe4f39-7918-4903-9c97-2e02a23b49cc,Namespace:kube-system,Attempt:0,}"
May 17 00:26:00.526557 containerd[2119]: time="2025-05-17T00:26:00.526277061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ljd7m,Uid:689d1667-b089-4fd0-8ef7-000242998aaf,Namespace:kube-system,Attempt:0,}"
May 17 00:26:00.527283 containerd[2119]: time="2025-05-17T00:26:00.526495917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-db6d855c8-9lxqb,Uid:739e36e0-8a50-4381-84dc-d3473d61c58e,Namespace:calico-system,Attempt:0,}"
May 17 00:26:00.527283 containerd[2119]: time="2025-05-17T00:26:00.526956381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58fb97568c-2dtmf,Uid:c7e5c708-6f1e-4a6a-8224-1c84baaaea1e,Namespace:calico-apiserver,Attempt:0,}"
May 17 00:26:00.528837 containerd[2119]: time="2025-05-17T00:26:00.528818362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-fm6b4,Uid:f87fae28-48af-42f8-92bd-1ecd569fff56,Namespace:calico-system,Attempt:0,}"
May 17 00:26:00.554312 containerd[2119]: time="2025-05-17T00:26:00.554279216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58fb97568c-9q2hm,Uid:e097cbd3-9914-4403-a492-af7b73e56564,Namespace:calico-apiserver,Attempt:0,}"
May 17 00:26:00.554619 containerd[2119]: time="2025-05-17T00:26:00.554467610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c9fd756b4-hwshk,Uid:d1e790dd-3d78-4acb-9596-f004784bb2fd,Namespace:calico-system,Attempt:0,}"
May 17 00:26:00.609790 containerd[2119]: time="2025-05-17T00:26:00.607889083Z" level=error msg="Failed to destroy network for sandbox \"86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:00.613245 containerd[2119]: time="2025-05-17T00:26:00.613160907Z" level=error msg="encountered an error cleaning up failed sandbox \"86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:00.633092 containerd[2119]: time="2025-05-17T00:26:00.632981573Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hhsvr,Uid:9548793e-04a2-4303-8663-86deb887e61f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:00.637464 kubelet[3368]: E0517 00:26:00.635809 3368 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:00.637464 kubelet[3368]: E0517 00:26:00.635897 3368 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hhsvr"
May 17 00:26:00.637464 kubelet[3368]: E0517 00:26:00.635927 3368 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hhsvr"
May 17 00:26:00.637811 kubelet[3368]: E0517 00:26:00.635987 3368 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hhsvr_calico-system(9548793e-04a2-4303-8663-86deb887e61f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hhsvr_calico-system(9548793e-04a2-4303-8663-86deb887e61f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hhsvr" podUID="9548793e-04a2-4303-8663-86deb887e61f"
May 17 00:26:00.699556 containerd[2119]: time="2025-05-17T00:26:00.699393098Z" level=error msg="Failed to destroy network for sandbox \"9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:00.709393 containerd[2119]: time="2025-05-17T00:26:00.709339714Z" level=error msg="encountered an error cleaning up failed sandbox \"9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:00.709670 containerd[2119]: time="2025-05-17T00:26:00.709629265Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-m5q6w,Uid:c4fe4f39-7918-4903-9c97-2e02a23b49cc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:00.711170 kubelet[3368]: E0517 00:26:00.709898 3368 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:00.711170 kubelet[3368]: E0517 00:26:00.709970 3368 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-m5q6w"
May 17 00:26:00.711170 kubelet[3368]: E0517 00:26:00.709997 3368 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-m5q6w"
May 17 00:26:00.711382 kubelet[3368]: E0517 00:26:00.710059 3368 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-m5q6w_kube-system(c4fe4f39-7918-4903-9c97-2e02a23b49cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-m5q6w_kube-system(c4fe4f39-7918-4903-9c97-2e02a23b49cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-m5q6w" podUID="c4fe4f39-7918-4903-9c97-2e02a23b49cc"
May 17 00:26:00.909839 containerd[2119]: time="2025-05-17T00:26:00.909672876Z" level=error msg="Failed to destroy network for sandbox \"92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:00.913931 containerd[2119]: time="2025-05-17T00:26:00.910041149Z" level=error msg="encountered an error cleaning up failed sandbox \"92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:00.913931 containerd[2119]: time="2025-05-17T00:26:00.910103045Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ljd7m,Uid:689d1667-b089-4fd0-8ef7-000242998aaf,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:00.914133 kubelet[3368]: E0517 00:26:00.910342 3368 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:00.914133 kubelet[3368]: E0517 00:26:00.910403 3368 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-ljd7m"
May 17 00:26:00.914133 kubelet[3368]: E0517 00:26:00.910430 3368 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-ljd7m"
May 17 00:26:00.914646 kubelet[3368]: E0517 00:26:00.910478 3368 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-ljd7m_kube-system(689d1667-b089-4fd0-8ef7-000242998aaf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-ljd7m_kube-system(689d1667-b089-4fd0-8ef7-000242998aaf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-ljd7m" podUID="689d1667-b089-4fd0-8ef7-000242998aaf"
May 17 00:26:00.918488 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5-shm.mount: Deactivated successfully.
May 17 00:26:00.978227 containerd[2119]: time="2025-05-17T00:26:00.977870481Z" level=error msg="Failed to destroy network for sandbox \"4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:00.978851 containerd[2119]: time="2025-05-17T00:26:00.978607036Z" level=error msg="encountered an error cleaning up failed sandbox \"4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:00.978851 containerd[2119]: time="2025-05-17T00:26:00.978671020Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-db6d855c8-9lxqb,Uid:739e36e0-8a50-4381-84dc-d3473d61c58e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:00.981680 kubelet[3368]: E0517 00:26:00.978871 3368 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:00.981680 kubelet[3368]: E0517 00:26:00.978949 3368 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-db6d855c8-9lxqb"
May 17 00:26:00.981680 kubelet[3368]: E0517 00:26:00.978976 3368 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-db6d855c8-9lxqb"
May 17 00:26:00.981913 kubelet[3368]: E0517 00:26:00.979023 3368 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-db6d855c8-9lxqb_calico-system(739e36e0-8a50-4381-84dc-d3473d61c58e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-db6d855c8-9lxqb_calico-system(739e36e0-8a50-4381-84dc-d3473d61c58e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-db6d855c8-9lxqb" podUID="739e36e0-8a50-4381-84dc-d3473d61c58e"
May 17 00:26:00.988055 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8-shm.mount: Deactivated successfully.
May 17 00:26:01.018681 containerd[2119]: time="2025-05-17T00:26:01.018619746Z" level=error msg="Failed to destroy network for sandbox \"b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:01.021403 containerd[2119]: time="2025-05-17T00:26:01.021348828Z" level=error msg="Failed to destroy network for sandbox \"4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:01.021920 containerd[2119]: time="2025-05-17T00:26:01.021874858Z" level=error msg="encountered an error cleaning up failed sandbox \"4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:01.022004 containerd[2119]: time="2025-05-17T00:26:01.021948872Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58fb97568c-2dtmf,Uid:c7e5c708-6f1e-4a6a-8224-1c84baaaea1e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:01.024532 kubelet[3368]: E0517 00:26:01.024481 3368 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:01.024689 kubelet[3368]: E0517 00:26:01.024559 3368 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58fb97568c-2dtmf"
May 17 00:26:01.024689 kubelet[3368]: E0517 00:26:01.024651 3368 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58fb97568c-2dtmf"
May 17 00:26:01.025662 kubelet[3368]: E0517 00:26:01.024865 3368 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-58fb97568c-2dtmf_calico-apiserver(c7e5c708-6f1e-4a6a-8224-1c84baaaea1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-58fb97568c-2dtmf_calico-apiserver(c7e5c708-6f1e-4a6a-8224-1c84baaaea1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58fb97568c-2dtmf" podUID="c7e5c708-6f1e-4a6a-8224-1c84baaaea1e"
May 17 00:26:01.029525 containerd[2119]: time="2025-05-17T00:26:01.029289369Z" level=error msg="encountered an error cleaning up failed sandbox \"b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:01.030122 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0-shm.mount: Deactivated successfully.
May 17 00:26:01.032769 containerd[2119]: time="2025-05-17T00:26:01.031563462Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58fb97568c-9q2hm,Uid:e097cbd3-9914-4403-a492-af7b73e56564,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:01.036437 kubelet[3368]: E0517 00:26:01.036392 3368 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:01.036624 kubelet[3368]: E0517 00:26:01.036462 3368 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58fb97568c-9q2hm"
May 17 00:26:01.036996 kubelet[3368]: E0517 00:26:01.036490 3368 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58fb97568c-9q2hm"
May 17 00:26:01.038283 containerd[2119]: time="2025-05-17T00:26:01.038238485Z" level=error msg="Failed to destroy network for sandbox \"b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:01.041314 kubelet[3368]: E0517 00:26:01.038618 3368 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-58fb97568c-9q2hm_calico-apiserver(e097cbd3-9914-4403-a492-af7b73e56564)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-58fb97568c-9q2hm_calico-apiserver(e097cbd3-9914-4403-a492-af7b73e56564)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58fb97568c-9q2hm" podUID="e097cbd3-9914-4403-a492-af7b73e56564"
May 17 00:26:01.041497 containerd[2119]: time="2025-05-17T00:26:01.040658276Z" level=error msg="encountered an error cleaning up failed sandbox \"b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:01.041497 containerd[2119]: time="2025-05-17T00:26:01.040734782Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-fm6b4,Uid:f87fae28-48af-42f8-92bd-1ecd569fff56,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:01.040286 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539-shm.mount: Deactivated successfully.
May 17 00:26:01.042927 kubelet[3368]: E0517 00:26:01.042575 3368 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:01.043036 kubelet[3368]: E0517 00:26:01.042959 3368 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-fm6b4"
May 17 00:26:01.043036 kubelet[3368]: E0517 00:26:01.042987 3368 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-fm6b4"
May 17 00:26:01.043136 kubelet[3368]: E0517 00:26:01.043046 3368 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-8f77d7b6c-fm6b4_calico-system(f87fae28-48af-42f8-92bd-1ecd569fff56)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-8f77d7b6c-fm6b4_calico-system(f87fae28-48af-42f8-92bd-1ecd569fff56)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-fm6b4" podUID="f87fae28-48af-42f8-92bd-1ecd569fff56"
May 17 00:26:01.055122 containerd[2119]: time="2025-05-17T00:26:01.055071239Z" level=error msg="Failed to destroy network for sandbox \"d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:01.055671 containerd[2119]: time="2025-05-17T00:26:01.055594358Z" level=error msg="encountered an error cleaning up failed sandbox \"d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:01.055822 containerd[2119]: time="2025-05-17T00:26:01.055670567Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c9fd756b4-hwshk,Uid:d1e790dd-3d78-4acb-9596-f004784bb2fd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:01.056186 kubelet[3368]: E0517 00:26:01.055906 3368 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:01.056186 kubelet[3368]: E0517 00:26:01.055980 3368 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7c9fd756b4-hwshk"
May 17 00:26:01.056186 kubelet[3368]: E0517 00:26:01.056006 3368 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7c9fd756b4-hwshk"
May 17 00:26:01.056372 kubelet[3368]: E0517 00:26:01.056057 3368 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7c9fd756b4-hwshk_calico-system(d1e790dd-3d78-4acb-9596-f004784bb2fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7c9fd756b4-hwshk_calico-system(d1e790dd-3d78-4acb-9596-f004784bb2fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7c9fd756b4-hwshk" podUID="d1e790dd-3d78-4acb-9596-f004784bb2fd"
May 17 00:26:01.428058 kubelet[3368]: I0517 00:26:01.428023 3368 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5"
May 17 00:26:01.431380 kubelet[3368]: I0517 00:26:01.431275 3368 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52"
May 17 00:26:01.483495 kubelet[3368]: I0517 00:26:01.483461 3368 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2"
May 17 00:26:01.488603 kubelet[3368]: I0517 00:26:01.488113 3368 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0"
May 17 00:26:01.490170 kubelet[3368]: I0517 00:26:01.490141 3368 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539"
May 17 00:26:01.492143 kubelet[3368]: I0517 00:26:01.492116 3368 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74"
May 17 00:26:01.496196 kubelet[3368]: I0517 00:26:01.496166 3368 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832"
May 17 00:26:01.499827 kubelet[3368]: I0517 00:26:01.499788 3368 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8"
May 17 00:26:01.518703 containerd[2119]: time="2025-05-17T00:26:01.518336457Z" level=info msg="StopPodSandbox for \"86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52\""
May 17 00:26:01.519674 containerd[2119]: time="2025-05-17T00:26:01.519570581Z" level=info msg="StopPodSandbox for \"4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8\""
May 17 00:26:01.520696 containerd[2119]: time="2025-05-17T00:26:01.520526749Z" level=info msg="Ensure that sandbox 4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8 in task-service has been cleanup successfully"
May 17 00:26:01.522243 containerd[2119]: time="2025-05-17T00:26:01.521322008Z" level=info msg="StopPodSandbox for \"9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2\""
May 17 00:26:01.522243 containerd[2119]: time="2025-05-17T00:26:01.521514155Z" level=info msg="Ensure that sandbox 9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2 in task-service has been cleanup successfully"
May 17 00:26:01.524190 containerd[2119]: time="2025-05-17T00:26:01.524150121Z" level=info msg="StopPodSandbox for \"92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5\""
May 17 00:26:01.524417 containerd[2119]: time="2025-05-17T00:26:01.524388564Z" level=info msg="Ensure that sandbox 92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5 in task-service has been cleanup successfully"
May 17 00:26:01.525946 containerd[2119]: time="2025-05-17T00:26:01.525653905Z" level=info msg="Ensure that sandbox 86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52 in task-service has been cleanup successfully"
May 17 00:26:01.527041 containerd[2119]: time="2025-05-17T00:26:01.526995251Z" level=info msg="StopPodSandbox for \"b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0\""
May 17 00:26:01.527534 containerd[2119]: time="2025-05-17T00:26:01.527509646Z" level=info msg="StopPodSandbox for \"4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539\""
May 17 00:26:01.528615 containerd[2119]: time="2025-05-17T00:26:01.527881116Z" level=info msg="StopPodSandbox for \"b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74\""
May 17 00:26:01.529454 containerd[2119]: time="2025-05-17T00:26:01.529409654Z" level=info msg="Ensure that sandbox 4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539 in task-service has been cleanup successfully"
May 17 00:26:01.537411 containerd[2119]: time="2025-05-17T00:26:01.537305212Z" level=info msg="Ensure that sandbox b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74 in task-service has been cleanup successfully"
May 17 00:26:01.538274 containerd[2119]: time="2025-05-17T00:26:01.527924556Z" level=info msg="StopPodSandbox for \"d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832\""
May 17 00:26:01.538483 containerd[2119]: time="2025-05-17T00:26:01.538446514Z" level=info msg="Ensure that sandbox d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832 in task-service has been cleanup successfully"
May 17 00:26:01.542080 containerd[2119]: time="2025-05-17T00:26:01.528082767Z" level=info msg="Ensure that sandbox b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0 in task-service has been cleanup successfully"
May 17 00:26:01.647128 containerd[2119]: time="2025-05-17T00:26:01.647071605Z" level=error msg="StopPodSandbox for \"9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2\" failed" error="failed to destroy network for sandbox \"9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:01.647855 kubelet[3368]: E0517 00:26:01.647690 3368 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2"
May 17 00:26:01.660458 kubelet[3368]: E0517 00:26:01.647774 3368 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2"}
May 17 00:26:01.660458 kubelet[3368]: E0517 00:26:01.660362 3368 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c4fe4f39-7918-4903-9c97-2e02a23b49cc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
May 17 00:26:01.660458 kubelet[3368]: E0517 00:26:01.660405 3368 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c4fe4f39-7918-4903-9c97-2e02a23b49cc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-m5q6w" podUID="c4fe4f39-7918-4903-9c97-2e02a23b49cc"
May 17 00:26:01.692982 containerd[2119]: time="2025-05-17T00:26:01.692803218Z" level=error msg="StopPodSandbox for \"b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74\" failed" error="failed to destroy network for sandbox \"b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:01.693489 kubelet[3368]: E0517 00:26:01.693313 3368 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74"
May 17 00:26:01.693489 kubelet[3368]: E0517 00:26:01.693364 3368 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74"}
May 17 00:26:01.693489 kubelet[3368]: E0517 00:26:01.693416 3368 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f87fae28-48af-42f8-92bd-1ecd569fff56\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
May 17 00:26:01.693489 kubelet[3368]: E0517 00:26:01.693447 3368 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f87fae28-48af-42f8-92bd-1ecd569fff56\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-fm6b4" podUID="f87fae28-48af-42f8-92bd-1ecd569fff56"
May 17 00:26:01.724750 containerd[2119]: time="2025-05-17T00:26:01.724570015Z" level=error msg="StopPodSandbox for \"92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5\" failed" error="failed to destroy network for sandbox \"92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:01.725466 kubelet[3368]: E0517 00:26:01.725064 3368 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5"
May 17 00:26:01.725466 kubelet[3368]: E0517 00:26:01.725122 3368 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5"}
May 17 00:26:01.725466 kubelet[3368]: E0517 00:26:01.725169 3368 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"689d1667-b089-4fd0-8ef7-000242998aaf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
May 17 00:26:01.725466 kubelet[3368]: E0517 00:26:01.725205 3368 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"689d1667-b089-4fd0-8ef7-000242998aaf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-ljd7m" podUID="689d1667-b089-4fd0-8ef7-000242998aaf"
May 17 00:26:01.729116 containerd[2119]: time="2025-05-17T00:26:01.729064921Z" level=error msg="StopPodSandbox for \"4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539\" failed" error="failed to destroy network for sandbox \"4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:01.730084 kubelet[3368]: E0517 00:26:01.729900 3368 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539"
May 17 00:26:01.730084 kubelet[3368]: E0517 00:26:01.729966 3368 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539"}
May 17 00:26:01.730084 kubelet[3368]: E0517 00:26:01.730011 3368 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c7e5c708-6f1e-4a6a-8224-1c84baaaea1e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
May 17 00:26:01.730084 kubelet[3368]: E0517 00:26:01.730047 3368 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c7e5c708-6f1e-4a6a-8224-1c84baaaea1e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58fb97568c-2dtmf" podUID="c7e5c708-6f1e-4a6a-8224-1c84baaaea1e"
May 17 00:26:01.732651 containerd[2119]: time="2025-05-17T00:26:01.732502254Z" level=error msg="StopPodSandbox for \"d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832\" failed" error="failed to destroy network for sandbox \"d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:01.733161 kubelet[3368]: E0517 00:26:01.732914 3368 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832"
May 17 00:26:01.733161 kubelet[3368]: E0517 00:26:01.732968 3368 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832"}
May 17 00:26:01.733161 kubelet[3368]: E0517 00:26:01.733085 3368 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d1e790dd-3d78-4acb-9596-f004784bb2fd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
May 17 00:26:01.733161 kubelet[3368]: E0517 00:26:01.733124 3368 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d1e790dd-3d78-4acb-9596-f004784bb2fd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7c9fd756b4-hwshk" podUID="d1e790dd-3d78-4acb-9596-f004784bb2fd"
May 17 00:26:01.739542 containerd[2119]: time="2025-05-17T00:26:01.739488141Z" level=error msg="StopPodSandbox for \"4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8\" failed" error="failed to destroy network for sandbox \"4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:01.741854 kubelet[3368]: E0517 00:26:01.740380 3368 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8"
May 17 00:26:01.741854 kubelet[3368]: E0517 00:26:01.740442 3368 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8"}
May 17 00:26:01.741854 kubelet[3368]: E0517 00:26:01.740483 3368 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"739e36e0-8a50-4381-84dc-d3473d61c58e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
May 17 00:26:01.741854 kubelet[3368]: E0517 00:26:01.740513 3368 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"739e36e0-8a50-4381-84dc-d3473d61c58e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-db6d855c8-9lxqb" podUID="739e36e0-8a50-4381-84dc-d3473d61c58e"
May 17 00:26:01.744844 containerd[2119]: time="2025-05-17T00:26:01.744713413Z" level=error msg="StopPodSandbox for \"86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52\" failed" error="failed to destroy network for sandbox \"86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:01.745312 kubelet[3368]: E0517 00:26:01.745265 3368 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52"
May 17 00:26:01.745485 kubelet[3368]: E0517 00:26:01.745463 3368 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52"}
May 17 00:26:01.745618 kubelet[3368]: E0517 00:26:01.745600 3368 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9548793e-04a2-4303-8663-86deb887e61f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
May 17 00:26:01.745811 kubelet[3368]: E0517 00:26:01.745784 3368 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9548793e-04a2-4303-8663-86deb887e61f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hhsvr" podUID="9548793e-04a2-4303-8663-86deb887e61f"
May 17 00:26:01.753568 containerd[2119]: time="2025-05-17T00:26:01.753514030Z" level=error msg="StopPodSandbox for \"b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0\" failed" error="failed to destroy network for sandbox \"b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0\": plugin type=\"calico\" failed (delete): stat
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:26:01.753969 kubelet[3368]: E0517 00:26:01.753922 3368 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0" May 17 00:26:01.754161 kubelet[3368]: E0517 00:26:01.754131 3368 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0"} May 17 00:26:01.754272 kubelet[3368]: E0517 00:26:01.754257 3368 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e097cbd3-9914-4403-a492-af7b73e56564\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:26:01.754436 kubelet[3368]: E0517 00:26:01.754413 3368 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e097cbd3-9914-4403-a492-af7b73e56564\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58fb97568c-9q2hm" podUID="e097cbd3-9914-4403-a492-af7b73e56564" May 17 00:26:01.793993 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832-shm.mount: Deactivated successfully. May 17 00:26:01.794188 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74-shm.mount: Deactivated successfully. May 17 00:26:02.981123 kubelet[3368]: I0517 00:26:02.980959 3368 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:26:05.628996 systemd-resolved[1975]: Under memory pressure, flushing caches. May 17 00:26:05.641709 systemd-journald[1569]: Under memory pressure, flushing caches. May 17 00:26:05.629104 systemd-resolved[1975]: Flushed all caches. May 17 00:26:06.714875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3398901812.mount: Deactivated successfully. 
May 17 00:26:06.819238 containerd[2119]: time="2025-05-17T00:26:06.801899116Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 17 00:26:06.822034 containerd[2119]: time="2025-05-17T00:26:06.821836310Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 6.39675259s" May 17 00:26:06.822034 containerd[2119]: time="2025-05-17T00:26:06.821876406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 17 00:26:06.822034 containerd[2119]: time="2025-05-17T00:26:06.821922938Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:26:06.860235 containerd[2119]: time="2025-05-17T00:26:06.860187920Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:26:06.861349 containerd[2119]: time="2025-05-17T00:26:06.861310541Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:26:06.914329 containerd[2119]: time="2025-05-17T00:26:06.914270913Z" level=info msg="CreateContainer within sandbox \"652cbebf4980dddf48605697cdd1fe13b962a599e9f333b65ee1d717a583ddac\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 17 00:26:06.957530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1503610940.mount: Deactivated successfully. May 17 00:26:06.972292 containerd[2119]: time="2025-05-17T00:26:06.972128273Z" level=info msg="CreateContainer within sandbox \"652cbebf4980dddf48605697cdd1fe13b962a599e9f333b65ee1d717a583ddac\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"eee30c2c60c12bdac4be702c411f1aa3b775fe4826dc72af95040f2dcb129ff6\"" May 17 00:26:06.974434 containerd[2119]: time="2025-05-17T00:26:06.973140242Z" level=info msg="StartContainer for \"eee30c2c60c12bdac4be702c411f1aa3b775fe4826dc72af95040f2dcb129ff6\"" May 17 00:26:07.109950 containerd[2119]: time="2025-05-17T00:26:07.109907594Z" level=info msg="StartContainer for \"eee30c2c60c12bdac4be702c411f1aa3b775fe4826dc72af95040f2dcb129ff6\" returns successfully" May 17 00:26:07.248375 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 17 00:26:07.249385 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. May 17 00:26:07.536616 containerd[2119]: time="2025-05-17T00:26:07.534565028Z" level=info msg="StopPodSandbox for \"d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832\"" May 17 00:26:07.686054 systemd-resolved[1975]: Under memory pressure, flushing caches. May 17 00:26:07.690818 systemd-journald[1569]: Under memory pressure, flushing caches. May 17 00:26:07.686086 systemd-resolved[1975]: Flushed all caches.
May 17 00:26:07.754876 kubelet[3368]: I0517 00:26:07.704117 3368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-znq85" podStartSLOduration=1.40371275 podStartE2EDuration="21.649938681s" podCreationTimestamp="2025-05-17 00:25:46 +0000 UTC" firstStartedPulling="2025-05-17 00:25:46.614339727 +0000 UTC m=+21.596066456" lastFinishedPulling="2025-05-17 00:26:06.860565658 +0000 UTC m=+41.842292387" observedRunningTime="2025-05-17 00:26:07.649231481 +0000 UTC m=+42.630958221" watchObservedRunningTime="2025-05-17 00:26:07.649938681 +0000 UTC m=+42.631665420" May 17 00:26:08.313137 containerd[2119]: 2025-05-17 00:26:07.829 [INFO][4831] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832" May 17 00:26:08.313137 containerd[2119]: 2025-05-17 00:26:07.831 [INFO][4831] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832" iface="eth0" netns="/var/run/netns/cni-b1c8ee3d-adcc-bef2-40bd-be5cee72f051" May 17 00:26:08.313137 containerd[2119]: 2025-05-17 00:26:07.831 [INFO][4831] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832" iface="eth0" netns="/var/run/netns/cni-b1c8ee3d-adcc-bef2-40bd-be5cee72f051" May 17 00:26:08.313137 containerd[2119]: 2025-05-17 00:26:07.833 [INFO][4831] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832" iface="eth0" netns="/var/run/netns/cni-b1c8ee3d-adcc-bef2-40bd-be5cee72f051" May 17 00:26:08.313137 containerd[2119]: 2025-05-17 00:26:07.833 [INFO][4831] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832" May 17 00:26:08.313137 containerd[2119]: 2025-05-17 00:26:07.833 [INFO][4831] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832" May 17 00:26:08.313137 containerd[2119]: 2025-05-17 00:26:08.281 [INFO][4853] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832" HandleID="k8s-pod-network.d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832" Workload="ip--172--31--31--125-k8s-whisker--7c9fd756b4--hwshk-eth0" May 17 00:26:08.313137 containerd[2119]: 2025-05-17 00:26:08.283 [INFO][4853] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:08.313137 containerd[2119]: 2025-05-17 00:26:08.284 [INFO][4853] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:26:08.313137 containerd[2119]: 2025-05-17 00:26:08.307 [WARNING][4853] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832" HandleID="k8s-pod-network.d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832" Workload="ip--172--31--31--125-k8s-whisker--7c9fd756b4--hwshk-eth0" May 17 00:26:08.313137 containerd[2119]: 2025-05-17 00:26:08.307 [INFO][4853] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832" HandleID="k8s-pod-network.d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832" Workload="ip--172--31--31--125-k8s-whisker--7c9fd756b4--hwshk-eth0" May 17 00:26:08.313137 containerd[2119]: 2025-05-17 00:26:08.309 [INFO][4853] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:08.313137 containerd[2119]: 2025-05-17 00:26:08.311 [INFO][4831] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832" May 17 00:26:08.313880 containerd[2119]: time="2025-05-17T00:26:08.313190556Z" level=info msg="TearDown network for sandbox \"d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832\" successfully" May 17 00:26:08.313880 containerd[2119]: time="2025-05-17T00:26:08.313213216Z" level=info msg="StopPodSandbox for \"d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832\" returns successfully" May 17 00:26:08.320330 systemd[1]: run-netns-cni\x2db1c8ee3d\x2dadcc\x2dbef2\x2d40bd\x2dbe5cee72f051.mount: Deactivated successfully. May 17 00:26:08.385855 kubelet[3368]: I0517 00:26:08.385812 3368 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1e790dd-3d78-4acb-9596-f004784bb2fd-whisker-ca-bundle\") pod \"d1e790dd-3d78-4acb-9596-f004784bb2fd\" (UID: \"d1e790dd-3d78-4acb-9596-f004784bb2fd\") " May 17 00:26:08.386012 kubelet[3368]: I0517 00:26:08.385872 3368 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d1e790dd-3d78-4acb-9596-f004784bb2fd-whisker-backend-key-pair\") pod \"d1e790dd-3d78-4acb-9596-f004784bb2fd\" (UID: \"d1e790dd-3d78-4acb-9596-f004784bb2fd\") " May 17 00:26:08.386012 kubelet[3368]: I0517 00:26:08.385902 3368 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-884hd\" (UniqueName: \"kubernetes.io/projected/d1e790dd-3d78-4acb-9596-f004784bb2fd-kube-api-access-884hd\") pod \"d1e790dd-3d78-4acb-9596-f004784bb2fd\" (UID: \"d1e790dd-3d78-4acb-9596-f004784bb2fd\") " May 17 00:26:08.395401 kubelet[3368]: I0517 00:26:08.393259 3368 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1e790dd-3d78-4acb-9596-f004784bb2fd-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d1e790dd-3d78-4acb-9596-f004784bb2fd" (UID: "d1e790dd-3d78-4acb-9596-f004784bb2fd"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 00:26:08.401649 kubelet[3368]: I0517 00:26:08.399619 3368 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1e790dd-3d78-4acb-9596-f004784bb2fd-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d1e790dd-3d78-4acb-9596-f004784bb2fd" (UID: "d1e790dd-3d78-4acb-9596-f004784bb2fd"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 00:26:08.401649 kubelet[3368]: I0517 00:26:08.401336 3368 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1e790dd-3d78-4acb-9596-f004784bb2fd-kube-api-access-884hd" (OuterVolumeSpecName: "kube-api-access-884hd") pod "d1e790dd-3d78-4acb-9596-f004784bb2fd" (UID: "d1e790dd-3d78-4acb-9596-f004784bb2fd"). InnerVolumeSpecName "kube-api-access-884hd". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:26:08.404386 systemd[1]: var-lib-kubelet-pods-d1e790dd\x2d3d78\x2d4acb\x2d9596\x2df004784bb2fd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d884hd.mount: Deactivated successfully. May 17 00:26:08.404555 systemd[1]: var-lib-kubelet-pods-d1e790dd\x2d3d78\x2d4acb\x2d9596\x2df004784bb2fd-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 17 00:26:08.486916 kubelet[3368]: I0517 00:26:08.486866 3368 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-884hd\" (UniqueName: \"kubernetes.io/projected/d1e790dd-3d78-4acb-9596-f004784bb2fd-kube-api-access-884hd\") on node \"ip-172-31-31-125\" DevicePath \"\"" May 17 00:26:08.486916 kubelet[3368]: I0517 00:26:08.486905 3368 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1e790dd-3d78-4acb-9596-f004784bb2fd-whisker-ca-bundle\") on node \"ip-172-31-31-125\" DevicePath \"\"" May 17 00:26:08.486916 kubelet[3368]: I0517 00:26:08.486918 3368 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d1e790dd-3d78-4acb-9596-f004784bb2fd-whisker-backend-key-pair\") on node \"ip-172-31-31-125\" DevicePath \"\"" May 17 00:26:09.038901 kubelet[3368]: I0517 00:26:09.038682 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bff9171d-67d4-4c78-9fc8-257a4f17dd49-whisker-backend-key-pair\") pod \"whisker-6fc77d4b98-pwrxt\" (UID: \"bff9171d-67d4-4c78-9fc8-257a4f17dd49\") " pod="calico-system/whisker-6fc77d4b98-pwrxt" May 17 00:26:09.038901 kubelet[3368]: I0517 00:26:09.038803 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bff9171d-67d4-4c78-9fc8-257a4f17dd49-whisker-ca-bundle\") pod \"whisker-6fc77d4b98-pwrxt\" (UID: \"bff9171d-67d4-4c78-9fc8-257a4f17dd49\") " pod="calico-system/whisker-6fc77d4b98-pwrxt" May 17 00:26:09.038901 kubelet[3368]: I0517 00:26:09.038828 3368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4qzc\" (UniqueName: \"kubernetes.io/projected/bff9171d-67d4-4c78-9fc8-257a4f17dd49-kube-api-access-k4qzc\") pod \"whisker-6fc77d4b98-pwrxt\" (UID: \"bff9171d-67d4-4c78-9fc8-257a4f17dd49\") " pod="calico-system/whisker-6fc77d4b98-pwrxt" May 17 00:26:09.198829 kubelet[3368]: I0517 00:26:09.195790 3368 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1e790dd-3d78-4acb-9596-f004784bb2fd" path="/var/lib/kubelet/pods/d1e790dd-3d78-4acb-9596-f004784bb2fd/volumes" May 17 00:26:09.304806 containerd[2119]: time="2025-05-17T00:26:09.304675794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fc77d4b98-pwrxt,Uid:bff9171d-67d4-4c78-9fc8-257a4f17dd49,Namespace:calico-system,Attempt:0,}" May 17 00:26:09.544138 systemd[1]: 
Started sshd@9-172.31.31.125:22-147.75.109.163:43152.service - OpenSSH per-connection server daemon (147.75.109.163:43152). May 17 00:26:09.725645 systemd-resolved[1975]: Under memory pressure, flushing caches. May 17 00:26:09.726688 systemd-journald[1569]: Under memory pressure, flushing caches. May 17 00:26:09.725655 systemd-resolved[1975]: Flushed all caches. May 17 00:26:09.787960 (udev-worker)[4809]: Network interface NamePolicy= disabled on kernel command line. May 17 00:26:09.806135 systemd-networkd[1648]: cali6c00f04b9df: Link UP May 17 00:26:09.820794 systemd-networkd[1648]: cali6c00f04b9df: Gained carrier May 17 00:26:09.868181 containerd[2119]: 2025-05-17 00:26:09.431 [INFO][4989] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:26:09.868181 containerd[2119]: 2025-05-17 00:26:09.471 [INFO][4989] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--125-k8s-whisker--6fc77d4b98--pwrxt-eth0 whisker-6fc77d4b98- calico-system bff9171d-67d4-4c78-9fc8-257a4f17dd49 892 0 2025-05-17 00:26:08 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6fc77d4b98 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-31-125 whisker-6fc77d4b98-pwrxt eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali6c00f04b9df [] [] }} ContainerID="4280d4be3d9bc06163509aa48116c11d03d53602f30bd30378b076205dbe1ddd" Namespace="calico-system" Pod="whisker-6fc77d4b98-pwrxt" WorkloadEndpoint="ip--172--31--31--125-k8s-whisker--6fc77d4b98--pwrxt-" May 17 00:26:09.868181 containerd[2119]: 2025-05-17 00:26:09.472 [INFO][4989] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4280d4be3d9bc06163509aa48116c11d03d53602f30bd30378b076205dbe1ddd" Namespace="calico-system" Pod="whisker-6fc77d4b98-pwrxt" WorkloadEndpoint="ip--172--31--31--125-k8s-whisker--6fc77d4b98--pwrxt-eth0" May 17 00:26:09.868181 containerd[2119]: 2025-05-17 00:26:09.542 [INFO][5002] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4280d4be3d9bc06163509aa48116c11d03d53602f30bd30378b076205dbe1ddd" HandleID="k8s-pod-network.4280d4be3d9bc06163509aa48116c11d03d53602f30bd30378b076205dbe1ddd" Workload="ip--172--31--31--125-k8s-whisker--6fc77d4b98--pwrxt-eth0" May 17 00:26:09.868181 containerd[2119]: 2025-05-17 00:26:09.542 [INFO][5002] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4280d4be3d9bc06163509aa48116c11d03d53602f30bd30378b076205dbe1ddd" HandleID="k8s-pod-network.4280d4be3d9bc06163509aa48116c11d03d53602f30bd30378b076205dbe1ddd" Workload="ip--172--31--31--125-k8s-whisker--6fc77d4b98--pwrxt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d90d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-125", "pod":"whisker-6fc77d4b98-pwrxt", "timestamp":"2025-05-17 00:26:09.54200696 +0000 UTC"}, Hostname:"ip-172-31-31-125", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:26:09.868181 containerd[2119]: 2025-05-17 00:26:09.542 [INFO][5002] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:09.868181 containerd[2119]: 2025-05-17 00:26:09.542 [INFO][5002] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 17 00:26:09.868181 containerd[2119]: 2025-05-17 00:26:09.542 [INFO][5002] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-125' May 17 00:26:09.868181 containerd[2119]: 2025-05-17 00:26:09.566 [INFO][5002] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4280d4be3d9bc06163509aa48116c11d03d53602f30bd30378b076205dbe1ddd" host="ip-172-31-31-125" May 17 00:26:09.868181 containerd[2119]: 2025-05-17 00:26:09.597 [INFO][5002] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-125" May 17 00:26:09.868181 containerd[2119]: 2025-05-17 00:26:09.655 [INFO][5002] ipam/ipam.go 511: Trying affinity for 192.168.75.128/26 host="ip-172-31-31-125" May 17 00:26:09.868181 containerd[2119]: 2025-05-17 00:26:09.662 [INFO][5002] ipam/ipam.go 158: Attempting to load block cidr=192.168.75.128/26 host="ip-172-31-31-125" May 17 00:26:09.868181 containerd[2119]: 2025-05-17 00:26:09.671 [INFO][5002] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.75.128/26 host="ip-172-31-31-125" May 17 00:26:09.868181 containerd[2119]: 2025-05-17 00:26:09.671 [INFO][5002] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.75.128/26 handle="k8s-pod-network.4280d4be3d9bc06163509aa48116c11d03d53602f30bd30378b076205dbe1ddd" host="ip-172-31-31-125" May 17 00:26:09.868181 containerd[2119]: 2025-05-17 00:26:09.681 [INFO][5002] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4280d4be3d9bc06163509aa48116c11d03d53602f30bd30378b076205dbe1ddd May 17 00:26:09.868181 containerd[2119]: 2025-05-17 00:26:09.725 [INFO][5002] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.75.128/26 handle="k8s-pod-network.4280d4be3d9bc06163509aa48116c11d03d53602f30bd30378b076205dbe1ddd" host="ip-172-31-31-125" May 17 00:26:09.868181 containerd[2119]: 2025-05-17 00:26:09.749 [INFO][5002] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.75.129/26] block=192.168.75.128/26 handle="k8s-pod-network.4280d4be3d9bc06163509aa48116c11d03d53602f30bd30378b076205dbe1ddd" host="ip-172-31-31-125" May 17 00:26:09.868181 containerd[2119]: 2025-05-17 00:26:09.749 [INFO][5002] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.75.129/26] handle="k8s-pod-network.4280d4be3d9bc06163509aa48116c11d03d53602f30bd30378b076205dbe1ddd" host="ip-172-31-31-125" May 17 00:26:09.868181 containerd[2119]: 2025-05-17 00:26:09.749 [INFO][5002] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:26:09.868181 containerd[2119]: 2025-05-17 00:26:09.749 [INFO][5002] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.129/26] IPv6=[] ContainerID="4280d4be3d9bc06163509aa48116c11d03d53602f30bd30378b076205dbe1ddd" HandleID="k8s-pod-network.4280d4be3d9bc06163509aa48116c11d03d53602f30bd30378b076205dbe1ddd" Workload="ip--172--31--31--125-k8s-whisker--6fc77d4b98--pwrxt-eth0" May 17 00:26:09.873764 containerd[2119]: 2025-05-17 00:26:09.759 [INFO][4989] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4280d4be3d9bc06163509aa48116c11d03d53602f30bd30378b076205dbe1ddd" Namespace="calico-system" Pod="whisker-6fc77d4b98-pwrxt" WorkloadEndpoint="ip--172--31--31--125-k8s-whisker--6fc77d4b98--pwrxt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--125-k8s-whisker--6fc77d4b98--pwrxt-eth0", GenerateName:"whisker-6fc77d4b98-", Namespace:"calico-system", SelfLink:"", UID:"bff9171d-67d4-4c78-9fc8-257a4f17dd49", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 26, 8, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6fc77d4b98", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-125", ContainerID:"", Pod:"whisker-6fc77d4b98-pwrxt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.75.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6c00f04b9df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:09.873764 containerd[2119]: 2025-05-17 00:26:09.759 [INFO][4989] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.129/32] ContainerID="4280d4be3d9bc06163509aa48116c11d03d53602f30bd30378b076205dbe1ddd" Namespace="calico-system" Pod="whisker-6fc77d4b98-pwrxt" WorkloadEndpoint="ip--172--31--31--125-k8s-whisker--6fc77d4b98--pwrxt-eth0" May 17 00:26:09.873764 containerd[2119]: 2025-05-17 00:26:09.759 [INFO][4989] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6c00f04b9df ContainerID="4280d4be3d9bc06163509aa48116c11d03d53602f30bd30378b076205dbe1ddd" Namespace="calico-system" Pod="whisker-6fc77d4b98-pwrxt" WorkloadEndpoint="ip--172--31--31--125-k8s-whisker--6fc77d4b98--pwrxt-eth0" May 17 00:26:09.873764 containerd[2119]: 2025-05-17 00:26:09.819 [INFO][4989] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4280d4be3d9bc06163509aa48116c11d03d53602f30bd30378b076205dbe1ddd" Namespace="calico-system" Pod="whisker-6fc77d4b98-pwrxt" WorkloadEndpoint="ip--172--31--31--125-k8s-whisker--6fc77d4b98--pwrxt-eth0" May 17 00:26:09.873764 containerd[2119]: 2025-05-17 00:26:09.821 [INFO][4989] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4280d4be3d9bc06163509aa48116c11d03d53602f30bd30378b076205dbe1ddd" Namespace="calico-system" Pod="whisker-6fc77d4b98-pwrxt"
WorkloadEndpoint="ip--172--31--31--125-k8s-whisker--6fc77d4b98--pwrxt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--125-k8s-whisker--6fc77d4b98--pwrxt-eth0", GenerateName:"whisker-6fc77d4b98-", Namespace:"calico-system", SelfLink:"", UID:"bff9171d-67d4-4c78-9fc8-257a4f17dd49", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 26, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6fc77d4b98", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-125", ContainerID:"4280d4be3d9bc06163509aa48116c11d03d53602f30bd30378b076205dbe1ddd", Pod:"whisker-6fc77d4b98-pwrxt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.75.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6c00f04b9df", MAC:"aa:4f:35:c7:22:b3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:09.873764 containerd[2119]: 2025-05-17 00:26:09.858 [INFO][4989] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4280d4be3d9bc06163509aa48116c11d03d53602f30bd30378b076205dbe1ddd" Namespace="calico-system" Pod="whisker-6fc77d4b98-pwrxt" WorkloadEndpoint="ip--172--31--31--125-k8s-whisker--6fc77d4b98--pwrxt-eth0" May 17 00:26:09.927775 sshd[5007]: Accepted publickey for core from 147.75.109.163 port 43152 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:26:09.935093 sshd[5007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:26:09.983318 containerd[2119]: time="2025-05-17T00:26:09.961659159Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:26:09.983318 containerd[2119]: time="2025-05-17T00:26:09.980790182Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:26:09.983318 containerd[2119]: time="2025-05-17T00:26:09.980837210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:26:09.983497 systemd-logind[2074]: New session 10 of user core. May 17 00:26:09.991568 systemd[1]: Started session-10.scope - Session 10 of User core. May 17 00:26:10.013496 containerd[2119]: time="2025-05-17T00:26:10.011655010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:26:10.348545 containerd[2119]: time="2025-05-17T00:26:10.348480901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fc77d4b98-pwrxt,Uid:bff9171d-67d4-4c78-9fc8-257a4f17dd49,Namespace:calico-system,Attempt:0,} returns sandbox id \"4280d4be3d9bc06163509aa48116c11d03d53602f30bd30378b076205dbe1ddd\"" May 17 00:26:10.351211 containerd[2119]: time="2025-05-17T00:26:10.350895898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:26:10.517617 kernel: bpftool[5133]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 17 00:26:10.641358 containerd[2119]: time="2025-05-17T00:26:10.641109572Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:26:10.642346 containerd[2119]: time="2025-05-17T00:26:10.642212786Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:26:10.642346 containerd[2119]: time="2025-05-17T00:26:10.642298963Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:26:10.645189 kubelet[3368]: E0517 00:26:10.644838 3368 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:26:10.649274 kubelet[3368]: E0517 00:26:10.648427 3368 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:26:10.707571 kubelet[3368]: E0517 00:26:10.707451 3368 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2817da406ebd46ae80a13be25f9034c9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k4qzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6fc77d4b98-pwrxt_calico-system(bff9171d-67d4-4c78-9fc8-257a4f17dd49): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:26:10.710963 containerd[2119]: time="2025-05-17T00:26:10.710559930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:26:10.867042 sshd[5007]: pam_unix(sshd:session): session closed for user core May 17 00:26:10.877424 systemd[1]: sshd@9-172.31.31.125:22-147.75.109.163:43152.service: Deactivated successfully. May 17 00:26:10.882512 systemd-logind[2074]: Session 10 logged out. Waiting for processes to exit. May 17 00:26:10.885123 systemd-networkd[1648]: vxlan.calico: Link UP May 17 00:26:10.885134 systemd-networkd[1648]: vxlan.calico: Gained carrier May 17 00:26:10.888780 systemd[1]: session-10.scope: Deactivated successfully. May 17 00:26:10.900745 systemd-logind[2074]: Removed session 10. May 17 00:26:10.925824 (udev-worker)[4810]: Network interface NamePolicy= disabled on kernel command line. 
May 17 00:26:10.930494 containerd[2119]: time="2025-05-17T00:26:10.929128767Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:26:10.930494 containerd[2119]: time="2025-05-17T00:26:10.930243375Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:26:10.930494 containerd[2119]: time="2025-05-17T00:26:10.930296465Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:26:10.931396 kubelet[3368]: E0517 00:26:10.931243 3368 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:26:10.932089 kubelet[3368]: E0517 00:26:10.931574 3368 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:26:10.932973 kubelet[3368]: E0517 00:26:10.932900 3368 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k4qzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6fc77d4b98-pwrxt_calico-system(bff9171d-67d4-4c78-9fc8-257a4f17dd49): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:26:10.948463 kubelet[3368]: E0517 00:26:10.948397 3368 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6fc77d4b98-pwrxt" podUID="bff9171d-67d4-4c78-9fc8-257a4f17dd49" May 17 00:26:11.517389 systemd-networkd[1648]: cali6c00f04b9df: Gained IPv6LL May 17 00:26:11.588891 kubelet[3368]: E0517 00:26:11.588814 3368 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-6fc77d4b98-pwrxt" podUID="bff9171d-67d4-4c78-9fc8-257a4f17dd49" May 17 00:26:11.774917 systemd-journald[1569]: Under memory pressure, flushing caches. May 17 00:26:11.774646 systemd-resolved[1975]: Under memory pressure, flushing caches. May 17 00:26:11.774677 systemd-resolved[1975]: Flushed all caches. May 17 00:26:12.540777 systemd-networkd[1648]: vxlan.calico: Gained IPv6LL May 17 00:26:13.181404 containerd[2119]: time="2025-05-17T00:26:13.180732056Z" level=info msg="StopPodSandbox for \"92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5\"" May 17 00:26:13.286332 containerd[2119]: 2025-05-17 00:26:13.235 [INFO][5224] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5" May 17 00:26:13.286332 containerd[2119]: 2025-05-17 00:26:13.235 [INFO][5224] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5" iface="eth0" netns="/var/run/netns/cni-87001e67-b39f-b88e-1dbe-b2fc6f60250a" May 17 00:26:13.286332 containerd[2119]: 2025-05-17 00:26:13.236 [INFO][5224] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5" iface="eth0" netns="/var/run/netns/cni-87001e67-b39f-b88e-1dbe-b2fc6f60250a" May 17 00:26:13.286332 containerd[2119]: 2025-05-17 00:26:13.236 [INFO][5224] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5" iface="eth0" netns="/var/run/netns/cni-87001e67-b39f-b88e-1dbe-b2fc6f60250a" May 17 00:26:13.286332 containerd[2119]: 2025-05-17 00:26:13.236 [INFO][5224] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5" May 17 00:26:13.286332 containerd[2119]: 2025-05-17 00:26:13.236 [INFO][5224] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5" May 17 00:26:13.286332 containerd[2119]: 2025-05-17 00:26:13.271 [INFO][5231] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5" HandleID="k8s-pod-network.92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5" Workload="ip--172--31--31--125-k8s-coredns--7c65d6cfc9--ljd7m-eth0" May 17 00:26:13.286332 containerd[2119]: 2025-05-17 00:26:13.271 [INFO][5231] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:13.286332 containerd[2119]: 2025-05-17 00:26:13.271 [INFO][5231] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:26:13.286332 containerd[2119]: 2025-05-17 00:26:13.278 [WARNING][5231] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5" HandleID="k8s-pod-network.92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5" Workload="ip--172--31--31--125-k8s-coredns--7c65d6cfc9--ljd7m-eth0" May 17 00:26:13.286332 containerd[2119]: 2025-05-17 00:26:13.278 [INFO][5231] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5" HandleID="k8s-pod-network.92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5" Workload="ip--172--31--31--125-k8s-coredns--7c65d6cfc9--ljd7m-eth0" May 17 00:26:13.286332 containerd[2119]: 2025-05-17 00:26:13.280 [INFO][5231] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:13.286332 containerd[2119]: 2025-05-17 00:26:13.283 [INFO][5224] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5" May 17 00:26:13.289230 containerd[2119]: time="2025-05-17T00:26:13.286515787Z" level=info msg="TearDown network for sandbox \"92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5\" successfully" May 17 00:26:13.289230 containerd[2119]: time="2025-05-17T00:26:13.286599501Z" level=info msg="StopPodSandbox for \"92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5\" returns successfully" May 17 00:26:13.289230 containerd[2119]: time="2025-05-17T00:26:13.287356706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ljd7m,Uid:689d1667-b089-4fd0-8ef7-000242998aaf,Namespace:kube-system,Attempt:1,}" May 17 00:26:13.293129 systemd[1]: run-netns-cni\x2d87001e67\x2db39f\x2db88e\x2d1dbe\x2db2fc6f60250a.mount: Deactivated successfully. May 17 00:26:13.446010 (udev-worker)[5172]: Network interface NamePolicy= disabled on kernel command line. 
May 17 00:26:13.448192 systemd-networkd[1648]: cali80657e86d5c: Link UP May 17 00:26:13.448486 systemd-networkd[1648]: cali80657e86d5c: Gained carrier May 17 00:26:13.468460 containerd[2119]: 2025-05-17 00:26:13.363 [INFO][5238] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--125-k8s-coredns--7c65d6cfc9--ljd7m-eth0 coredns-7c65d6cfc9- kube-system 689d1667-b089-4fd0-8ef7-000242998aaf 960 0 2025-05-17 00:25:30 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-31-125 coredns-7c65d6cfc9-ljd7m eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali80657e86d5c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d1e91c5373072f64c8b1fcb8d7cbf229029ec27c879fb3e9b61893d07fca8a0c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ljd7m" WorkloadEndpoint="ip--172--31--31--125-k8s-coredns--7c65d6cfc9--ljd7m-" May 17 00:26:13.468460 containerd[2119]: 2025-05-17 00:26:13.364 [INFO][5238] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d1e91c5373072f64c8b1fcb8d7cbf229029ec27c879fb3e9b61893d07fca8a0c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ljd7m" WorkloadEndpoint="ip--172--31--31--125-k8s-coredns--7c65d6cfc9--ljd7m-eth0" May 17 00:26:13.468460 containerd[2119]: 2025-05-17 00:26:13.397 [INFO][5249] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d1e91c5373072f64c8b1fcb8d7cbf229029ec27c879fb3e9b61893d07fca8a0c" HandleID="k8s-pod-network.d1e91c5373072f64c8b1fcb8d7cbf229029ec27c879fb3e9b61893d07fca8a0c" Workload="ip--172--31--31--125-k8s-coredns--7c65d6cfc9--ljd7m-eth0" May 17 00:26:13.468460 containerd[2119]: 2025-05-17 00:26:13.398 [INFO][5249] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d1e91c5373072f64c8b1fcb8d7cbf229029ec27c879fb3e9b61893d07fca8a0c" HandleID="k8s-pod-network.d1e91c5373072f64c8b1fcb8d7cbf229029ec27c879fb3e9b61893d07fca8a0c" Workload="ip--172--31--31--125-k8s-coredns--7c65d6cfc9--ljd7m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9630), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-31-125", "pod":"coredns-7c65d6cfc9-ljd7m", "timestamp":"2025-05-17 00:26:13.397940231 +0000 UTC"}, Hostname:"ip-172-31-31-125", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:26:13.468460 containerd[2119]: 2025-05-17 00:26:13.399 [INFO][5249] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:13.468460 containerd[2119]: 2025-05-17 00:26:13.399 [INFO][5249] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 17 00:26:13.468460 containerd[2119]: 2025-05-17 00:26:13.399 [INFO][5249] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-125' May 17 00:26:13.468460 containerd[2119]: 2025-05-17 00:26:13.406 [INFO][5249] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d1e91c5373072f64c8b1fcb8d7cbf229029ec27c879fb3e9b61893d07fca8a0c" host="ip-172-31-31-125" May 17 00:26:13.468460 containerd[2119]: 2025-05-17 00:26:13.412 [INFO][5249] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-125" May 17 00:26:13.468460 containerd[2119]: 2025-05-17 00:26:13.417 [INFO][5249] ipam/ipam.go 511: Trying affinity for 192.168.75.128/26 host="ip-172-31-31-125" May 17 00:26:13.468460 containerd[2119]: 2025-05-17 00:26:13.420 [INFO][5249] ipam/ipam.go 158: Attempting to load block cidr=192.168.75.128/26 host="ip-172-31-31-125" May 17 00:26:13.468460 containerd[2119]: 2025-05-17 00:26:13.423 [INFO][5249] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.75.128/26 host="ip-172-31-31-125" May 17 00:26:13.468460 containerd[2119]: 2025-05-17 00:26:13.423 [INFO][5249] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.75.128/26 handle="k8s-pod-network.d1e91c5373072f64c8b1fcb8d7cbf229029ec27c879fb3e9b61893d07fca8a0c" host="ip-172-31-31-125" May 17 00:26:13.468460 containerd[2119]: 2025-05-17 00:26:13.425 [INFO][5249] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d1e91c5373072f64c8b1fcb8d7cbf229029ec27c879fb3e9b61893d07fca8a0c May 17 00:26:13.468460 containerd[2119]: 2025-05-17 00:26:13.430 [INFO][5249] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.75.128/26 handle="k8s-pod-network.d1e91c5373072f64c8b1fcb8d7cbf229029ec27c879fb3e9b61893d07fca8a0c" host="ip-172-31-31-125" May 17 00:26:13.468460 containerd[2119]: 2025-05-17 00:26:13.439 [INFO][5249] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.75.130/26] block=192.168.75.128/26 handle="k8s-pod-network.d1e91c5373072f64c8b1fcb8d7cbf229029ec27c879fb3e9b61893d07fca8a0c" host="ip-172-31-31-125" May 17 00:26:13.468460 containerd[2119]: 2025-05-17 00:26:13.439 [INFO][5249] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.75.130/26] handle="k8s-pod-network.d1e91c5373072f64c8b1fcb8d7cbf229029ec27c879fb3e9b61893d07fca8a0c" host="ip-172-31-31-125" May 17 00:26:13.468460 containerd[2119]: 2025-05-17 00:26:13.439 [INFO][5249] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:26:13.468460 containerd[2119]: 2025-05-17 00:26:13.439 [INFO][5249] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.130/26] IPv6=[] ContainerID="d1e91c5373072f64c8b1fcb8d7cbf229029ec27c879fb3e9b61893d07fca8a0c" HandleID="k8s-pod-network.d1e91c5373072f64c8b1fcb8d7cbf229029ec27c879fb3e9b61893d07fca8a0c" Workload="ip--172--31--31--125-k8s-coredns--7c65d6cfc9--ljd7m-eth0" May 17 00:26:13.469536 containerd[2119]: 2025-05-17 00:26:13.443 [INFO][5238] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d1e91c5373072f64c8b1fcb8d7cbf229029ec27c879fb3e9b61893d07fca8a0c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ljd7m" WorkloadEndpoint="ip--172--31--31--125-k8s-coredns--7c65d6cfc9--ljd7m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--125-k8s-coredns--7c65d6cfc9--ljd7m-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"689d1667-b089-4fd0-8ef7-000242998aaf", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 25, 30, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-125", ContainerID:"", Pod:"coredns-7c65d6cfc9-ljd7m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali80657e86d5c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:13.469536 containerd[2119]: 2025-05-17 00:26:13.443 [INFO][5238] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.130/32] ContainerID="d1e91c5373072f64c8b1fcb8d7cbf229029ec27c879fb3e9b61893d07fca8a0c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ljd7m" WorkloadEndpoint="ip--172--31--31--125-k8s-coredns--7c65d6cfc9--ljd7m-eth0" May 17 00:26:13.469536 containerd[2119]: 2025-05-17 00:26:13.443 [INFO][5238] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali80657e86d5c ContainerID="d1e91c5373072f64c8b1fcb8d7cbf229029ec27c879fb3e9b61893d07fca8a0c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ljd7m" WorkloadEndpoint="ip--172--31--31--125-k8s-coredns--7c65d6cfc9--ljd7m-eth0" May 17 00:26:13.469536 containerd[2119]: 2025-05-17 00:26:13.449 [INFO][5238] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d1e91c5373072f64c8b1fcb8d7cbf229029ec27c879fb3e9b61893d07fca8a0c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ljd7m"
WorkloadEndpoint="ip--172--31--31--125-k8s-coredns--7c65d6cfc9--ljd7m-eth0" May 17 00:26:13.469536 containerd[2119]: 2025-05-17 00:26:13.449 [INFO][5238] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d1e91c5373072f64c8b1fcb8d7cbf229029ec27c879fb3e9b61893d07fca8a0c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ljd7m" WorkloadEndpoint="ip--172--31--31--125-k8s-coredns--7c65d6cfc9--ljd7m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--125-k8s-coredns--7c65d6cfc9--ljd7m-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"689d1667-b089-4fd0-8ef7-000242998aaf", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 25, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-125", ContainerID:"d1e91c5373072f64c8b1fcb8d7cbf229029ec27c879fb3e9b61893d07fca8a0c", Pod:"coredns-7c65d6cfc9-ljd7m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali80657e86d5c", MAC:"06:93:cf:d6:75:ef", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:13.469536 containerd[2119]: 2025-05-17 00:26:13.464 [INFO][5238] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d1e91c5373072f64c8b1fcb8d7cbf229029ec27c879fb3e9b61893d07fca8a0c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ljd7m" WorkloadEndpoint="ip--172--31--31--125-k8s-coredns--7c65d6cfc9--ljd7m-eth0" May 17 00:26:13.497507 containerd[2119]: time="2025-05-17T00:26:13.497313110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:26:13.498381 containerd[2119]: time="2025-05-17T00:26:13.497789841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:26:13.498381 containerd[2119]: time="2025-05-17T00:26:13.498092954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:26:13.498381 containerd[2119]: time="2025-05-17T00:26:13.498338508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:26:13.576784 containerd[2119]: time="2025-05-17T00:26:13.576740701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ljd7m,Uid:689d1667-b089-4fd0-8ef7-000242998aaf,Namespace:kube-system,Attempt:1,} returns sandbox id \"d1e91c5373072f64c8b1fcb8d7cbf229029ec27c879fb3e9b61893d07fca8a0c\"" May 17 00:26:13.580346 containerd[2119]: time="2025-05-17T00:26:13.580301617Z" level=info msg="CreateContainer within sandbox \"d1e91c5373072f64c8b1fcb8d7cbf229029ec27c879fb3e9b61893d07fca8a0c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:26:13.618864 containerd[2119]: time="2025-05-17T00:26:13.618809793Z" level=info msg="CreateContainer within sandbox \"d1e91c5373072f64c8b1fcb8d7cbf229029ec27c879fb3e9b61893d07fca8a0c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"beb59585b7182eee5574654f1600ca18a87b943a841f0b8c50528432c119c1ac\"" May 17 00:26:13.619615 containerd[2119]: time="2025-05-17T00:26:13.619564844Z" level=info msg="StartContainer for \"beb59585b7182eee5574654f1600ca18a87b943a841f0b8c50528432c119c1ac\"" May 17 00:26:13.692818 containerd[2119]: time="2025-05-17T00:26:13.692766046Z" level=info msg="StartContainer for \"beb59585b7182eee5574654f1600ca18a87b943a841f0b8c50528432c119c1ac\" returns successfully" May 17 00:26:14.180284 containerd[2119]: time="2025-05-17T00:26:14.180247639Z" level=info msg="StopPodSandbox for \"86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52\"" May 17 00:26:14.180865 containerd[2119]: time="2025-05-17T00:26:14.180619229Z" level=info msg="StopPodSandbox for \"b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74\"" May 17 00:26:14.182424 containerd[2119]: time="2025-05-17T00:26:14.182400365Z" level=info msg="StopPodSandbox for \"9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2\"" May 17 00:26:14.183827 containerd[2119]: time="2025-05-17T00:26:14.182701979Z" level=info msg="StopPodSandbox for \"b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0\"" May 17 00:26:14.296194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3275869604.mount: Deactivated successfully. May 17 00:26:14.470112 containerd[2119]: 2025-05-17 00:26:14.325 [INFO][5375] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52" May 17 00:26:14.470112 containerd[2119]: 2025-05-17 00:26:14.325 [INFO][5375] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52" iface="eth0" netns="/var/run/netns/cni-5c00d879-95aa-97e0-5389-e602f1e3a533" May 17 00:26:14.470112 containerd[2119]: 2025-05-17 00:26:14.327 [INFO][5375] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52" iface="eth0" netns="/var/run/netns/cni-5c00d879-95aa-97e0-5389-e602f1e3a533" May 17 00:26:14.470112 containerd[2119]: 2025-05-17 00:26:14.330 [INFO][5375] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52" iface="eth0" netns="/var/run/netns/cni-5c00d879-95aa-97e0-5389-e602f1e3a533" May 17 00:26:14.470112 containerd[2119]: 2025-05-17 00:26:14.330 [INFO][5375] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52" May 17 00:26:14.470112 containerd[2119]: 2025-05-17 00:26:14.330 [INFO][5375] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52" May 17 00:26:14.470112 containerd[2119]: 2025-05-17 00:26:14.417 [INFO][5406] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52" HandleID="k8s-pod-network.86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52" Workload="ip--172--31--31--125-k8s-csi--node--driver--hhsvr-eth0" May 17 00:26:14.470112 containerd[2119]: 2025-05-17 00:26:14.422 [INFO][5406] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:14.470112 containerd[2119]: 2025-05-17 00:26:14.423 [INFO][5406] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:26:14.470112 containerd[2119]: 2025-05-17 00:26:14.446 [WARNING][5406] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52" HandleID="k8s-pod-network.86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52" Workload="ip--172--31--31--125-k8s-csi--node--driver--hhsvr-eth0" May 17 00:26:14.470112 containerd[2119]: 2025-05-17 00:26:14.447 [INFO][5406] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52" HandleID="k8s-pod-network.86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52" Workload="ip--172--31--31--125-k8s-csi--node--driver--hhsvr-eth0" May 17 00:26:14.470112 containerd[2119]: 2025-05-17 00:26:14.456 [INFO][5406] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:14.470112 containerd[2119]: 2025-05-17 00:26:14.467 [INFO][5375] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52" May 17 00:26:14.474839 containerd[2119]: time="2025-05-17T00:26:14.474706964Z" level=info msg="TearDown network for sandbox \"86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52\" successfully" May 17 00:26:14.474839 containerd[2119]: time="2025-05-17T00:26:14.474748733Z" level=info msg="StopPodSandbox for \"86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52\" returns successfully" May 17 00:26:14.480206 systemd[1]: run-netns-cni\x2d5c00d879\x2d95aa\x2d97e0\x2d5389\x2de602f1e3a533.mount: Deactivated successfully. May 17 00:26:14.481478 containerd[2119]: time="2025-05-17T00:26:14.480400477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hhsvr,Uid:9548793e-04a2-4303-8663-86deb887e61f,Namespace:calico-system,Attempt:1,}" May 17 00:26:14.500518 containerd[2119]: 2025-05-17 00:26:14.318 [INFO][5367] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74" May 17 00:26:14.500518 containerd[2119]: 2025-05-17 00:26:14.323 [INFO][5367] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74" iface="eth0" netns="/var/run/netns/cni-58a41697-bfcc-fb47-3de0-e1436e87ad10" May 17 00:26:14.500518 containerd[2119]: 2025-05-17 00:26:14.323 [INFO][5367] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74" iface="eth0" netns="/var/run/netns/cni-58a41697-bfcc-fb47-3de0-e1436e87ad10" May 17 00:26:14.500518 containerd[2119]: 2025-05-17 00:26:14.324 [INFO][5367] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74" iface="eth0" netns="/var/run/netns/cni-58a41697-bfcc-fb47-3de0-e1436e87ad10" May 17 00:26:14.500518 containerd[2119]: 2025-05-17 00:26:14.324 [INFO][5367] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74" May 17 00:26:14.500518 containerd[2119]: 2025-05-17 00:26:14.324 [INFO][5367] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74" May 17 00:26:14.500518 containerd[2119]: 2025-05-17 00:26:14.429 [INFO][5401] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74" HandleID="k8s-pod-network.b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74" Workload="ip--172--31--31--125-k8s-goldmane--8f77d7b6c--fm6b4-eth0" May 17 00:26:14.500518 containerd[2119]: 2025-05-17 00:26:14.429 [INFO][5401] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:14.500518 containerd[2119]: 2025-05-17 00:26:14.456 [INFO][5401] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:26:14.500518 containerd[2119]: 2025-05-17 00:26:14.479 [WARNING][5401] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74" HandleID="k8s-pod-network.b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74" Workload="ip--172--31--31--125-k8s-goldmane--8f77d7b6c--fm6b4-eth0" May 17 00:26:14.500518 containerd[2119]: 2025-05-17 00:26:14.479 [INFO][5401] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74" HandleID="k8s-pod-network.b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74" Workload="ip--172--31--31--125-k8s-goldmane--8f77d7b6c--fm6b4-eth0" May 17 00:26:14.500518 containerd[2119]: 2025-05-17 00:26:14.489 [INFO][5401] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:14.500518 containerd[2119]: 2025-05-17 00:26:14.497 [INFO][5367] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74" May 17 00:26:14.505088 containerd[2119]: time="2025-05-17T00:26:14.501242445Z" level=info msg="TearDown network for sandbox \"b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74\" successfully" May 17 00:26:14.505088 containerd[2119]: time="2025-05-17T00:26:14.501276172Z" level=info msg="StopPodSandbox for \"b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74\" returns successfully" May 17 00:26:14.507508 containerd[2119]: time="2025-05-17T00:26:14.507372111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-fm6b4,Uid:f87fae28-48af-42f8-92bd-1ecd569fff56,Namespace:calico-system,Attempt:1,}" May 17 00:26:14.510424 systemd[1]: run-netns-cni\x2d58a41697\x2dbfcc\x2dfb47\x2d3de0\x2de1436e87ad10.mount: Deactivated successfully. May 17 00:26:14.551854 containerd[2119]: 2025-05-17 00:26:14.347 [INFO][5380] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0" May 17 00:26:14.551854 containerd[2119]: 2025-05-17 00:26:14.349 [INFO][5380] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0" iface="eth0" netns="/var/run/netns/cni-579c2941-8eb0-d454-8230-d65a2d251ba6" May 17 00:26:14.551854 containerd[2119]: 2025-05-17 00:26:14.351 [INFO][5380] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0" iface="eth0" netns="/var/run/netns/cni-579c2941-8eb0-d454-8230-d65a2d251ba6" May 17 00:26:14.551854 containerd[2119]: 2025-05-17 00:26:14.352 [INFO][5380] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0" iface="eth0" netns="/var/run/netns/cni-579c2941-8eb0-d454-8230-d65a2d251ba6" May 17 00:26:14.551854 containerd[2119]: 2025-05-17 00:26:14.352 [INFO][5380] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0" May 17 00:26:14.551854 containerd[2119]: 2025-05-17 00:26:14.352 [INFO][5380] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0" May 17 00:26:14.551854 containerd[2119]: 2025-05-17 00:26:14.462 [INFO][5412] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0" HandleID="k8s-pod-network.b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0" Workload="ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--9q2hm-eth0" May 17 00:26:14.551854 containerd[2119]: 2025-05-17 00:26:14.462 [INFO][5412] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:14.551854 containerd[2119]: 2025-05-17 00:26:14.488 [INFO][5412] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:26:14.551854 containerd[2119]: 2025-05-17 00:26:14.501 [WARNING][5412] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0" HandleID="k8s-pod-network.b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0" Workload="ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--9q2hm-eth0" May 17 00:26:14.551854 containerd[2119]: 2025-05-17 00:26:14.504 [INFO][5412] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0" HandleID="k8s-pod-network.b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0" Workload="ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--9q2hm-eth0" May 17 00:26:14.551854 containerd[2119]: 2025-05-17 00:26:14.517 [INFO][5412] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:14.551854 containerd[2119]: 2025-05-17 00:26:14.536 [INFO][5380] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0" May 17 00:26:14.554095 containerd[2119]: time="2025-05-17T00:26:14.552638672Z" level=info msg="TearDown network for sandbox \"b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0\" successfully" May 17 00:26:14.554095 containerd[2119]: time="2025-05-17T00:26:14.552674103Z" level=info msg="StopPodSandbox for \"b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0\" returns successfully" May 17 00:26:14.554492 containerd[2119]: time="2025-05-17T00:26:14.554460275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58fb97568c-9q2hm,Uid:e097cbd3-9914-4403-a492-af7b73e56564,Namespace:calico-apiserver,Attempt:1,}" May 17 00:26:14.562645 containerd[2119]: 2025-05-17 00:26:14.364 [INFO][5379] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2" May 17 00:26:14.562645 containerd[2119]: 2025-05-17 00:26:14.365 [INFO][5379] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2" iface="eth0" netns="/var/run/netns/cni-588a6155-592d-4387-7b6d-c8de2928eefa" May 17 00:26:14.562645 containerd[2119]: 2025-05-17 00:26:14.365 [INFO][5379] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2" iface="eth0" netns="/var/run/netns/cni-588a6155-592d-4387-7b6d-c8de2928eefa" May 17 00:26:14.562645 containerd[2119]: 2025-05-17 00:26:14.367 [INFO][5379] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2" iface="eth0" netns="/var/run/netns/cni-588a6155-592d-4387-7b6d-c8de2928eefa" May 17 00:26:14.562645 containerd[2119]: 2025-05-17 00:26:14.367 [INFO][5379] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2" May 17 00:26:14.562645 containerd[2119]: 2025-05-17 00:26:14.367 [INFO][5379] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2" May 17 00:26:14.562645 containerd[2119]: 2025-05-17 00:26:14.492 [INFO][5414] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2" HandleID="k8s-pod-network.9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2" Workload="ip--172--31--31--125-k8s-coredns--7c65d6cfc9--m5q6w-eth0" May 17 00:26:14.562645 containerd[2119]: 2025-05-17 00:26:14.494 [INFO][5414] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:14.562645 containerd[2119]: 2025-05-17 00:26:14.518 [INFO][5414] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:26:14.562645 containerd[2119]: 2025-05-17 00:26:14.539 [WARNING][5414] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2" HandleID="k8s-pod-network.9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2" Workload="ip--172--31--31--125-k8s-coredns--7c65d6cfc9--m5q6w-eth0" May 17 00:26:14.562645 containerd[2119]: 2025-05-17 00:26:14.539 [INFO][5414] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2" HandleID="k8s-pod-network.9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2" Workload="ip--172--31--31--125-k8s-coredns--7c65d6cfc9--m5q6w-eth0" May 17 00:26:14.562645 containerd[2119]: 2025-05-17 00:26:14.542 [INFO][5414] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:14.562645 containerd[2119]: 2025-05-17 00:26:14.555 [INFO][5379] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2" May 17 00:26:14.564136 containerd[2119]: time="2025-05-17T00:26:14.564005328Z" level=info msg="TearDown network for sandbox \"9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2\" successfully" May 17 00:26:14.564136 containerd[2119]: time="2025-05-17T00:26:14.564038563Z" level=info msg="StopPodSandbox for \"9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2\" returns successfully" May 17 00:26:14.566397 containerd[2119]: time="2025-05-17T00:26:14.565940856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-m5q6w,Uid:c4fe4f39-7918-4903-9c97-2e02a23b49cc,Namespace:kube-system,Attempt:1,}" May 17 00:26:14.793179 kubelet[3368]: I0517 00:26:14.791085 3368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-ljd7m" podStartSLOduration=44.791030576 podStartE2EDuration="44.791030576s" podCreationTimestamp="2025-05-17 00:25:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:26:14.790218049 +0000 UTC m=+49.771944788" watchObservedRunningTime="2025-05-17 00:26:14.791030576 +0000 UTC m=+49.772757316" May 17 00:26:14.846222 systemd-networkd[1648]: cali80657e86d5c: Gained IPv6LL May 17 00:26:14.945519 systemd-networkd[1648]: cali5776d90a4bd: Link UP May 17 00:26:14.959245 systemd-networkd[1648]: cali5776d90a4bd: Gained carrier May 17 00:26:15.006887 containerd[2119]: 2025-05-17 00:26:14.674 [INFO][5433] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--125-k8s-csi--node--driver--hhsvr-eth0 csi-node-driver- calico-system 9548793e-04a2-4303-8663-86deb887e61f 979 0 2025-05-17 00:25:46 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:68bf44dd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-31-125 csi-node-driver-hhsvr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5776d90a4bd [] [] }} ContainerID="42f2733cfbf7242e0f8ca3df84c800fffdc5c49e1e2364b7744a2982048caa9b" Namespace="calico-system" Pod="csi-node-driver-hhsvr" WorkloadEndpoint="ip--172--31--31--125-k8s-csi--node--driver--hhsvr-" May 17 00:26:15.006887 containerd[2119]: 2025-05-17 00:26:14.674 [INFO][5433] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="42f2733cfbf7242e0f8ca3df84c800fffdc5c49e1e2364b7744a2982048caa9b" Namespace="calico-system" Pod="csi-node-driver-hhsvr" WorkloadEndpoint="ip--172--31--31--125-k8s-csi--node--driver--hhsvr-eth0" May 17 00:26:15.006887 containerd[2119]: 2025-05-17 00:26:14.781 [INFO][5460] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="42f2733cfbf7242e0f8ca3df84c800fffdc5c49e1e2364b7744a2982048caa9b" HandleID="k8s-pod-network.42f2733cfbf7242e0f8ca3df84c800fffdc5c49e1e2364b7744a2982048caa9b" Workload="ip--172--31--31--125-k8s-csi--node--driver--hhsvr-eth0" May 17 00:26:15.006887 containerd[2119]: 2025-05-17 00:26:14.782 [INFO][5460] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="42f2733cfbf7242e0f8ca3df84c800fffdc5c49e1e2364b7744a2982048caa9b" HandleID="k8s-pod-network.42f2733cfbf7242e0f8ca3df84c800fffdc5c49e1e2364b7744a2982048caa9b" 
Workload="ip--172--31--31--125-k8s-csi--node--driver--hhsvr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f0b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-125", "pod":"csi-node-driver-hhsvr", "timestamp":"2025-05-17 00:26:14.781004068 +0000 UTC"}, Hostname:"ip-172-31-31-125", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:26:15.006887 containerd[2119]: 2025-05-17 00:26:14.782 [INFO][5460] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:15.006887 containerd[2119]: 2025-05-17 00:26:14.782 [INFO][5460] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:26:15.006887 containerd[2119]: 2025-05-17 00:26:14.782 [INFO][5460] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-125' May 17 00:26:15.006887 containerd[2119]: 2025-05-17 00:26:14.802 [INFO][5460] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.42f2733cfbf7242e0f8ca3df84c800fffdc5c49e1e2364b7744a2982048caa9b" host="ip-172-31-31-125" May 17 00:26:15.006887 containerd[2119]: 2025-05-17 00:26:14.820 [INFO][5460] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-125" May 17 00:26:15.006887 containerd[2119]: 2025-05-17 00:26:14.856 [INFO][5460] ipam/ipam.go 511: Trying affinity for 192.168.75.128/26 host="ip-172-31-31-125" May 17 00:26:15.006887 containerd[2119]: 2025-05-17 00:26:14.862 [INFO][5460] ipam/ipam.go 158: Attempting to load block cidr=192.168.75.128/26 host="ip-172-31-31-125" May 17 00:26:15.006887 containerd[2119]: 2025-05-17 00:26:14.867 [INFO][5460] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.75.128/26 host="ip-172-31-31-125" May 17 00:26:15.006887 containerd[2119]: 2025-05-17 00:26:14.868 [INFO][5460] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.75.128/26 handle="k8s-pod-network.42f2733cfbf7242e0f8ca3df84c800fffdc5c49e1e2364b7744a2982048caa9b" host="ip-172-31-31-125" May 17 00:26:15.006887 containerd[2119]: 2025-05-17 00:26:14.872 [INFO][5460] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.42f2733cfbf7242e0f8ca3df84c800fffdc5c49e1e2364b7744a2982048caa9b May 17 00:26:15.006887 containerd[2119]: 2025-05-17 00:26:14.890 [INFO][5460] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.75.128/26 handle="k8s-pod-network.42f2733cfbf7242e0f8ca3df84c800fffdc5c49e1e2364b7744a2982048caa9b" host="ip-172-31-31-125" May 17 00:26:15.006887 containerd[2119]: 2025-05-17 00:26:14.899 [INFO][5460] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.75.131/26] block=192.168.75.128/26 handle="k8s-pod-network.42f2733cfbf7242e0f8ca3df84c800fffdc5c49e1e2364b7744a2982048caa9b" host="ip-172-31-31-125" May 17 00:26:15.006887 containerd[2119]: 2025-05-17 00:26:14.899 [INFO][5460] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.75.131/26] handle="k8s-pod-network.42f2733cfbf7242e0f8ca3df84c800fffdc5c49e1e2364b7744a2982048caa9b" host="ip-172-31-31-125" May 17 00:26:15.006887 containerd[2119]: 2025-05-17 00:26:14.899 [INFO][5460] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:26:15.006887 containerd[2119]: 2025-05-17 00:26:14.899 [INFO][5460] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.131/26] IPv6=[] ContainerID="42f2733cfbf7242e0f8ca3df84c800fffdc5c49e1e2364b7744a2982048caa9b" HandleID="k8s-pod-network.42f2733cfbf7242e0f8ca3df84c800fffdc5c49e1e2364b7744a2982048caa9b" Workload="ip--172--31--31--125-k8s-csi--node--driver--hhsvr-eth0" May 17 00:26:15.009594 containerd[2119]: 2025-05-17 00:26:14.910 [INFO][5433] cni-plugin/k8s.go 418: Populated endpoint ContainerID="42f2733cfbf7242e0f8ca3df84c800fffdc5c49e1e2364b7744a2982048caa9b" Namespace="calico-system" Pod="csi-node-driver-hhsvr" WorkloadEndpoint="ip--172--31--31--125-k8s-csi--node--driver--hhsvr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--125-k8s-csi--node--driver--hhsvr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9548793e-04a2-4303-8663-86deb887e61f", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 25, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-125", ContainerID:"", Pod:"csi-node-driver-hhsvr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5776d90a4bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:15.009594 containerd[2119]: 2025-05-17 00:26:14.912 [INFO][5433] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.131/32] ContainerID="42f2733cfbf7242e0f8ca3df84c800fffdc5c49e1e2364b7744a2982048caa9b" Namespace="calico-system" Pod="csi-node-driver-hhsvr" WorkloadEndpoint="ip--172--31--31--125-k8s-csi--node--driver--hhsvr-eth0" May 17 00:26:15.009594 containerd[2119]: 2025-05-17 00:26:14.912 [INFO][5433] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5776d90a4bd ContainerID="42f2733cfbf7242e0f8ca3df84c800fffdc5c49e1e2364b7744a2982048caa9b" Namespace="calico-system" Pod="csi-node-driver-hhsvr" WorkloadEndpoint="ip--172--31--31--125-k8s-csi--node--driver--hhsvr-eth0" May 17 00:26:15.009594 containerd[2119]: 2025-05-17 00:26:14.964 [INFO][5433] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="42f2733cfbf7242e0f8ca3df84c800fffdc5c49e1e2364b7744a2982048caa9b" Namespace="calico-system" Pod="csi-node-driver-hhsvr" WorkloadEndpoint="ip--172--31--31--125-k8s-csi--node--driver--hhsvr-eth0" May 17 00:26:15.009594 containerd[2119]: 2025-05-17 00:26:14.967 [INFO][5433] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="42f2733cfbf7242e0f8ca3df84c800fffdc5c49e1e2364b7744a2982048caa9b" 
Namespace="calico-system" Pod="csi-node-driver-hhsvr" WorkloadEndpoint="ip--172--31--31--125-k8s-csi--node--driver--hhsvr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--125-k8s-csi--node--driver--hhsvr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9548793e-04a2-4303-8663-86deb887e61f", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 25, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-125", ContainerID:"42f2733cfbf7242e0f8ca3df84c800fffdc5c49e1e2364b7744a2982048caa9b", Pod:"csi-node-driver-hhsvr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5776d90a4bd", MAC:"2e:49:a6:4c:b9:9f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:15.009594 containerd[2119]: 2025-05-17 00:26:14.994 [INFO][5433] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="42f2733cfbf7242e0f8ca3df84c800fffdc5c49e1e2364b7744a2982048caa9b" Namespace="calico-system" Pod="csi-node-driver-hhsvr" WorkloadEndpoint="ip--172--31--31--125-k8s-csi--node--driver--hhsvr-eth0" May 17 00:26:15.099062 systemd-networkd[1648]: calia92f5acd775: Link UP May 17 00:26:15.101637 systemd-networkd[1648]: calia92f5acd775: Gained carrier May 17 00:26:15.107611 containerd[2119]: time="2025-05-17T00:26:15.105919376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:26:15.107611 containerd[2119]: time="2025-05-17T00:26:15.105991457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:26:15.107611 containerd[2119]: time="2025-05-17T00:26:15.106009307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:26:15.107611 containerd[2119]: time="2025-05-17T00:26:15.106389488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:26:15.129078 containerd[2119]: 2025-05-17 00:26:14.840 [INFO][5466] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--125-k8s-coredns--7c65d6cfc9--m5q6w-eth0 coredns-7c65d6cfc9- kube-system c4fe4f39-7918-4903-9c97-2e02a23b49cc 982 0 2025-05-17 00:25:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-31-125 coredns-7c65d6cfc9-m5q6w eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia92f5acd775 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="978c9f04bcf870571067cac046335fa2fbb7ce4c26b212cbcbbda6860cf35e4c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m5q6w" WorkloadEndpoint="ip--172--31--31--125-k8s-coredns--7c65d6cfc9--m5q6w-" May 17 00:26:15.129078 containerd[2119]: 2025-05-17 00:26:14.841 [INFO][5466] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="978c9f04bcf870571067cac046335fa2fbb7ce4c26b212cbcbbda6860cf35e4c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m5q6w" WorkloadEndpoint="ip--172--31--31--125-k8s-coredns--7c65d6cfc9--m5q6w-eth0" May 17 00:26:15.129078 containerd[2119]: 2025-05-17 00:26:15.004 [INFO][5491] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="978c9f04bcf870571067cac046335fa2fbb7ce4c26b212cbcbbda6860cf35e4c" HandleID="k8s-pod-network.978c9f04bcf870571067cac046335fa2fbb7ce4c26b212cbcbbda6860cf35e4c" Workload="ip--172--31--31--125-k8s-coredns--7c65d6cfc9--m5q6w-eth0" May 17 00:26:15.129078 containerd[2119]: 2025-05-17 00:26:15.005 [INFO][5491] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="978c9f04bcf870571067cac046335fa2fbb7ce4c26b212cbcbbda6860cf35e4c" HandleID="k8s-pod-network.978c9f04bcf870571067cac046335fa2fbb7ce4c26b212cbcbbda6860cf35e4c" Workload="ip--172--31--31--125-k8s-coredns--7c65d6cfc9--m5q6w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003322d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-31-125", "pod":"coredns-7c65d6cfc9-m5q6w", "timestamp":"2025-05-17 00:26:15.004556559 +0000 UTC"}, Hostname:"ip-172-31-31-125", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:26:15.129078 containerd[2119]: 2025-05-17 00:26:15.005 [INFO][5491] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:15.129078 containerd[2119]: 2025-05-17 00:26:15.005 [INFO][5491] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:26:15.129078 containerd[2119]: 2025-05-17 00:26:15.005 [INFO][5491] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-125' May 17 00:26:15.129078 containerd[2119]: 2025-05-17 00:26:15.017 [INFO][5491] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.978c9f04bcf870571067cac046335fa2fbb7ce4c26b212cbcbbda6860cf35e4c" host="ip-172-31-31-125" May 17 00:26:15.129078 containerd[2119]: 2025-05-17 00:26:15.024 [INFO][5491] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-125" May 17 00:26:15.129078 containerd[2119]: 2025-05-17 00:26:15.038 [INFO][5491] ipam/ipam.go 511: Trying affinity for 192.168.75.128/26 host="ip-172-31-31-125" May 17 00:26:15.129078 containerd[2119]: 2025-05-17 00:26:15.043 [INFO][5491] ipam/ipam.go 158: Attempting to load block cidr=192.168.75.128/26 host="ip-172-31-31-125" May 17 00:26:15.129078 containerd[2119]: 2025-05-17 00:26:15.049 [INFO][5491] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.75.128/26 host="ip-172-31-31-125" May 17 00:26:15.129078 containerd[2119]: 2025-05-17 00:26:15.049 [INFO][5491] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.75.128/26 handle="k8s-pod-network.978c9f04bcf870571067cac046335fa2fbb7ce4c26b212cbcbbda6860cf35e4c" host="ip-172-31-31-125" May 17 00:26:15.129078 containerd[2119]: 2025-05-17 00:26:15.055 [INFO][5491] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.978c9f04bcf870571067cac046335fa2fbb7ce4c26b212cbcbbda6860cf35e4c May 17 00:26:15.129078 containerd[2119]: 2025-05-17 00:26:15.063 [INFO][5491] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.75.128/26 handle="k8s-pod-network.978c9f04bcf870571067cac046335fa2fbb7ce4c26b212cbcbbda6860cf35e4c" host="ip-172-31-31-125" May 17 00:26:15.129078 containerd[2119]: 2025-05-17 00:26:15.076 [INFO][5491] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.75.132/26] block=192.168.75.128/26 handle="k8s-pod-network.978c9f04bcf870571067cac046335fa2fbb7ce4c26b212cbcbbda6860cf35e4c" host="ip-172-31-31-125" May 17 00:26:15.129078 containerd[2119]: 2025-05-17 00:26:15.076 [INFO][5491] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.75.132/26] handle="k8s-pod-network.978c9f04bcf870571067cac046335fa2fbb7ce4c26b212cbcbbda6860cf35e4c" host="ip-172-31-31-125" May 17 00:26:15.129078 containerd[2119]: 2025-05-17 00:26:15.077 [INFO][5491] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
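Each pod's outcome is summarized in a single "ipam_plugin.go 283: Calico CNI IPAM assigned addresses" entry, which makes this log easy to audit mechanically. The filter below extracts (workload, address) pairs from journal text on stdin; the regex is fitted to the exact message format shown in this section and may need adjusting for other Calico releases.

// grep_assignments.go — list the IPs Calico handed out, straight from the
// journal. Run e.g.: journalctl -o cat -u containerd | go run grep_assignments.go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches: ... assigned addresses IPv4=[192.168.75.130/26] ... Workload="..."
var re = regexp.MustCompile(`assigned addresses IPv4=\[([^\]]+)\].*?Workload="([^"]+)"`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("%-60s %s\n", m[2], m[1])
		}
	}
}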
May 17 00:26:15.129078 containerd[2119]: 2025-05-17 00:26:15.077 [INFO][5491] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.132/26] IPv6=[] ContainerID="978c9f04bcf870571067cac046335fa2fbb7ce4c26b212cbcbbda6860cf35e4c" HandleID="k8s-pod-network.978c9f04bcf870571067cac046335fa2fbb7ce4c26b212cbcbbda6860cf35e4c" Workload="ip--172--31--31--125-k8s-coredns--7c65d6cfc9--m5q6w-eth0" May 17 00:26:15.130534 containerd[2119]: 2025-05-17 00:26:15.086 [INFO][5466] cni-plugin/k8s.go 418: Populated endpoint ContainerID="978c9f04bcf870571067cac046335fa2fbb7ce4c26b212cbcbbda6860cf35e4c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m5q6w" WorkloadEndpoint="ip--172--31--31--125-k8s-coredns--7c65d6cfc9--m5q6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--125-k8s-coredns--7c65d6cfc9--m5q6w-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c4fe4f39-7918-4903-9c97-2e02a23b49cc", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 25, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-125", ContainerID:"", Pod:"coredns-7c65d6cfc9-m5q6w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia92f5acd775", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:15.130534 containerd[2119]: 2025-05-17 00:26:15.086 [INFO][5466] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.132/32] ContainerID="978c9f04bcf870571067cac046335fa2fbb7ce4c26b212cbcbbda6860cf35e4c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m5q6w" WorkloadEndpoint="ip--172--31--31--125-k8s-coredns--7c65d6cfc9--m5q6w-eth0" May 17 00:26:15.130534 containerd[2119]: 2025-05-17 00:26:15.086 [INFO][5466] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia92f5acd775 ContainerID="978c9f04bcf870571067cac046335fa2fbb7ce4c26b212cbcbbda6860cf35e4c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m5q6w" WorkloadEndpoint="ip--172--31--31--125-k8s-coredns--7c65d6cfc9--m5q6w-eth0" May 17 00:26:15.130534 containerd[2119]: 2025-05-17 00:26:15.101 [INFO][5466] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="978c9f04bcf870571067cac046335fa2fbb7ce4c26b212cbcbbda6860cf35e4c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m5q6w" 
WorkloadEndpoint="ip--172--31--31--125-k8s-coredns--7c65d6cfc9--m5q6w-eth0" May 17 00:26:15.130534 containerd[2119]: 2025-05-17 00:26:15.101 [INFO][5466] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="978c9f04bcf870571067cac046335fa2fbb7ce4c26b212cbcbbda6860cf35e4c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m5q6w" WorkloadEndpoint="ip--172--31--31--125-k8s-coredns--7c65d6cfc9--m5q6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--125-k8s-coredns--7c65d6cfc9--m5q6w-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c4fe4f39-7918-4903-9c97-2e02a23b49cc", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 25, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-125", ContainerID:"978c9f04bcf870571067cac046335fa2fbb7ce4c26b212cbcbbda6860cf35e4c", Pod:"coredns-7c65d6cfc9-m5q6w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia92f5acd775", MAC:"aa:e6:06:0c:39:ce", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:15.130534 containerd[2119]: 2025-05-17 00:26:15.124 [INFO][5466] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="978c9f04bcf870571067cac046335fa2fbb7ce4c26b212cbcbbda6860cf35e4c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m5q6w" WorkloadEndpoint="ip--172--31--31--125-k8s-coredns--7c65d6cfc9--m5q6w-eth0" May 17 00:26:15.228688 systemd-networkd[1648]: calib7def621fb7: Link UP May 17 00:26:15.230445 containerd[2119]: time="2025-05-17T00:26:15.228925787Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:26:15.230445 containerd[2119]: time="2025-05-17T00:26:15.228986111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:26:15.230445 containerd[2119]: time="2025-05-17T00:26:15.229026233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:26:15.230445 containerd[2119]: time="2025-05-17T00:26:15.229155283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:26:15.231713 systemd-networkd[1648]: calib7def621fb7: Gained carrier May 17 00:26:15.267185 containerd[2119]: 2025-05-17 00:26:14.773 [INFO][5442] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--125-k8s-goldmane--8f77d7b6c--fm6b4-eth0 goldmane-8f77d7b6c- calico-system f87fae28-48af-42f8-92bd-1ecd569fff56 978 0 2025-05-17 00:25:45 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:8f77d7b6c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-31-125 goldmane-8f77d7b6c-fm6b4 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib7def621fb7 [] [] }} ContainerID="18c7848d074b2a78a2f32ed0ed90617ac7dab9ce5625d467f146be5111b7b3ee" Namespace="calico-system" Pod="goldmane-8f77d7b6c-fm6b4" WorkloadEndpoint="ip--172--31--31--125-k8s-goldmane--8f77d7b6c--fm6b4-" May 17 00:26:15.267185 containerd[2119]: 2025-05-17 00:26:14.774 [INFO][5442] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="18c7848d074b2a78a2f32ed0ed90617ac7dab9ce5625d467f146be5111b7b3ee" Namespace="calico-system" Pod="goldmane-8f77d7b6c-fm6b4" WorkloadEndpoint="ip--172--31--31--125-k8s-goldmane--8f77d7b6c--fm6b4-eth0" May 17 00:26:15.267185 containerd[2119]: 2025-05-17 00:26:15.042 [INFO][5489] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="18c7848d074b2a78a2f32ed0ed90617ac7dab9ce5625d467f146be5111b7b3ee" HandleID="k8s-pod-network.18c7848d074b2a78a2f32ed0ed90617ac7dab9ce5625d467f146be5111b7b3ee" Workload="ip--172--31--31--125-k8s-goldmane--8f77d7b6c--fm6b4-eth0" May 17 00:26:15.267185 containerd[2119]: 2025-05-17 00:26:15.043 [INFO][5489] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="18c7848d074b2a78a2f32ed0ed90617ac7dab9ce5625d467f146be5111b7b3ee" HandleID="k8s-pod-network.18c7848d074b2a78a2f32ed0ed90617ac7dab9ce5625d467f146be5111b7b3ee" Workload="ip--172--31--31--125-k8s-goldmane--8f77d7b6c--fm6b4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033fac0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-125", "pod":"goldmane-8f77d7b6c-fm6b4", "timestamp":"2025-05-17 00:26:15.042901177 +0000 UTC"}, Hostname:"ip-172-31-31-125", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:26:15.267185 containerd[2119]: 2025-05-17 00:26:15.043 [INFO][5489] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:15.267185 containerd[2119]: 2025-05-17 00:26:15.080 [INFO][5489] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:26:15.267185 containerd[2119]: 2025-05-17 00:26:15.081 [INFO][5489] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-125' May 17 00:26:15.267185 containerd[2119]: 2025-05-17 00:26:15.119 [INFO][5489] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.18c7848d074b2a78a2f32ed0ed90617ac7dab9ce5625d467f146be5111b7b3ee" host="ip-172-31-31-125" May 17 00:26:15.267185 containerd[2119]: 2025-05-17 00:26:15.135 [INFO][5489] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-125" May 17 00:26:15.267185 containerd[2119]: 2025-05-17 00:26:15.145 [INFO][5489] ipam/ipam.go 511: Trying affinity for 192.168.75.128/26 host="ip-172-31-31-125" May 17 00:26:15.267185 containerd[2119]: 2025-05-17 00:26:15.149 [INFO][5489] ipam/ipam.go 158: Attempting to load block cidr=192.168.75.128/26 host="ip-172-31-31-125" May 17 00:26:15.267185 containerd[2119]: 2025-05-17 00:26:15.161 [INFO][5489] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.75.128/26 host="ip-172-31-31-125" May 17 00:26:15.267185 containerd[2119]: 2025-05-17 00:26:15.161 [INFO][5489] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.75.128/26 handle="k8s-pod-network.18c7848d074b2a78a2f32ed0ed90617ac7dab9ce5625d467f146be5111b7b3ee" host="ip-172-31-31-125" May 17 00:26:15.267185 containerd[2119]: 2025-05-17 00:26:15.164 [INFO][5489] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.18c7848d074b2a78a2f32ed0ed90617ac7dab9ce5625d467f146be5111b7b3ee May 17 00:26:15.267185 containerd[2119]: 2025-05-17 00:26:15.177 [INFO][5489] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.75.128/26 handle="k8s-pod-network.18c7848d074b2a78a2f32ed0ed90617ac7dab9ce5625d467f146be5111b7b3ee" host="ip-172-31-31-125" May 17 00:26:15.267185 containerd[2119]: 2025-05-17 00:26:15.191 [INFO][5489] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.75.133/26] block=192.168.75.128/26 handle="k8s-pod-network.18c7848d074b2a78a2f32ed0ed90617ac7dab9ce5625d467f146be5111b7b3ee" host="ip-172-31-31-125" May 17 00:26:15.267185 containerd[2119]: 2025-05-17 00:26:15.191 [INFO][5489] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.75.133/26] handle="k8s-pod-network.18c7848d074b2a78a2f32ed0ed90617ac7dab9ce5625d467f146be5111b7b3ee" host="ip-172-31-31-125" May 17 00:26:15.267185 containerd[2119]: 2025-05-17 00:26:15.191 [INFO][5489] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
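All five assignments in this section draw on the same affine block, and they arrive in order: .130 (coredns-ljd7m), .131 (csi-node-driver-hhsvr), .132 (coredns-m5q6w), .133 (goldmane-fm6b4, above), and .134 (calico-apiserver, below). A /26 leaves 6 host bits, so the block holds 2^6 = 64 addresses, 192.168.75.128 through 192.168.75.191; if it fills, Calico claims an additional block for the host. The arithmetic, checked with the standard library:

// block_math.go — capacity and bounds of the affine block used above.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	p := netip.MustParsePrefix("192.168.75.128/26")
	n := 1 << (32 - p.Bits()) // 2^6 = 64 addresses
	fmt.Printf("block %s holds %d addresses\n", p, n)

	last := p.Addr()
	for i := 0; i < n-1; i++ { // step to the final address in the block
		last = last.Next()
	}
	fmt.Println("range:", p.Addr(), "-", last) // 192.168.75.128 - 192.168.75.191

	for _, s := range []string{"192.168.75.130", "192.168.75.134", "192.168.75.192"} {
		fmt.Println(s, "in block:", p.Contains(netip.MustParseAddr(s)))
	}
}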
May 17 00:26:15.267185 containerd[2119]: 2025-05-17 00:26:15.191 [INFO][5489] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.133/26] IPv6=[] ContainerID="18c7848d074b2a78a2f32ed0ed90617ac7dab9ce5625d467f146be5111b7b3ee" HandleID="k8s-pod-network.18c7848d074b2a78a2f32ed0ed90617ac7dab9ce5625d467f146be5111b7b3ee" Workload="ip--172--31--31--125-k8s-goldmane--8f77d7b6c--fm6b4-eth0" May 17 00:26:15.273475 containerd[2119]: 2025-05-17 00:26:15.217 [INFO][5442] cni-plugin/k8s.go 418: Populated endpoint ContainerID="18c7848d074b2a78a2f32ed0ed90617ac7dab9ce5625d467f146be5111b7b3ee" Namespace="calico-system" Pod="goldmane-8f77d7b6c-fm6b4" WorkloadEndpoint="ip--172--31--31--125-k8s-goldmane--8f77d7b6c--fm6b4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--125-k8s-goldmane--8f77d7b6c--fm6b4-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"f87fae28-48af-42f8-92bd-1ecd569fff56", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 25, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-125", ContainerID:"", Pod:"goldmane-8f77d7b6c-fm6b4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.75.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib7def621fb7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:15.273475 containerd[2119]: 2025-05-17 00:26:15.219 [INFO][5442] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.133/32] ContainerID="18c7848d074b2a78a2f32ed0ed90617ac7dab9ce5625d467f146be5111b7b3ee" Namespace="calico-system" Pod="goldmane-8f77d7b6c-fm6b4" WorkloadEndpoint="ip--172--31--31--125-k8s-goldmane--8f77d7b6c--fm6b4-eth0" May 17 00:26:15.273475 containerd[2119]: 2025-05-17 00:26:15.220 [INFO][5442] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib7def621fb7 ContainerID="18c7848d074b2a78a2f32ed0ed90617ac7dab9ce5625d467f146be5111b7b3ee" Namespace="calico-system" Pod="goldmane-8f77d7b6c-fm6b4" WorkloadEndpoint="ip--172--31--31--125-k8s-goldmane--8f77d7b6c--fm6b4-eth0" May 17 00:26:15.273475 containerd[2119]: 2025-05-17 00:26:15.231 [INFO][5442] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="18c7848d074b2a78a2f32ed0ed90617ac7dab9ce5625d467f146be5111b7b3ee" Namespace="calico-system" Pod="goldmane-8f77d7b6c-fm6b4" WorkloadEndpoint="ip--172--31--31--125-k8s-goldmane--8f77d7b6c--fm6b4-eth0" May 17 00:26:15.273475 containerd[2119]: 2025-05-17 00:26:15.234 [INFO][5442] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="18c7848d074b2a78a2f32ed0ed90617ac7dab9ce5625d467f146be5111b7b3ee" Namespace="calico-system" Pod="goldmane-8f77d7b6c-fm6b4" 
WorkloadEndpoint="ip--172--31--31--125-k8s-goldmane--8f77d7b6c--fm6b4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--125-k8s-goldmane--8f77d7b6c--fm6b4-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"f87fae28-48af-42f8-92bd-1ecd569fff56", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 25, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-125", ContainerID:"18c7848d074b2a78a2f32ed0ed90617ac7dab9ce5625d467f146be5111b7b3ee", Pod:"goldmane-8f77d7b6c-fm6b4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.75.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib7def621fb7", MAC:"42:cd:b6:eb:35:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:15.273475 containerd[2119]: 2025-05-17 00:26:15.259 [INFO][5442] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="18c7848d074b2a78a2f32ed0ed90617ac7dab9ce5625d467f146be5111b7b3ee" Namespace="calico-system" Pod="goldmane-8f77d7b6c-fm6b4" WorkloadEndpoint="ip--172--31--31--125-k8s-goldmane--8f77d7b6c--fm6b4-eth0" May 17 00:26:15.311799 systemd[1]: run-netns-cni\x2d579c2941\x2d8eb0\x2dd454\x2d8230\x2dd65a2d251ba6.mount: Deactivated successfully. May 17 00:26:15.311999 systemd[1]: run-netns-cni\x2d588a6155\x2d592d\x2d4387\x2d7b6d\x2dc8de2928eefa.mount: Deactivated successfully. May 17 00:26:15.361815 systemd-networkd[1648]: cali7087658358d: Link UP May 17 00:26:15.367666 systemd-networkd[1648]: cali7087658358d: Gained carrier May 17 00:26:15.368659 containerd[2119]: time="2025-05-17T00:26:15.366759279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hhsvr,Uid:9548793e-04a2-4303-8663-86deb887e61f,Namespace:calico-system,Attempt:1,} returns sandbox id \"42f2733cfbf7242e0f8ca3df84c800fffdc5c49e1e2364b7744a2982048caa9b\"" May 17 00:26:15.399603 containerd[2119]: time="2025-05-17T00:26:15.398902996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 17 00:26:15.420110 containerd[2119]: time="2025-05-17T00:26:15.418149395Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:26:15.420110 containerd[2119]: time="2025-05-17T00:26:15.418237826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:26:15.420110 containerd[2119]: time="2025-05-17T00:26:15.418277077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:26:15.420110 containerd[2119]: time="2025-05-17T00:26:15.418446218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:26:15.444438 containerd[2119]: 2025-05-17 00:26:14.851 [INFO][5453] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--9q2hm-eth0 calico-apiserver-58fb97568c- calico-apiserver e097cbd3-9914-4403-a492-af7b73e56564 981 0 2025-05-17 00:25:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:58fb97568c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-31-125 calico-apiserver-58fb97568c-9q2hm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7087658358d [] [] }} ContainerID="724faaf140462e2a3aedee0e2f4948df670f13129a153bb7dcc162323bac02ba" Namespace="calico-apiserver" Pod="calico-apiserver-58fb97568c-9q2hm" WorkloadEndpoint="ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--9q2hm-" May 17 00:26:15.444438 containerd[2119]: 2025-05-17 00:26:14.852 [INFO][5453] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="724faaf140462e2a3aedee0e2f4948df670f13129a153bb7dcc162323bac02ba" Namespace="calico-apiserver" Pod="calico-apiserver-58fb97568c-9q2hm" WorkloadEndpoint="ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--9q2hm-eth0" May 17 00:26:15.444438 containerd[2119]: 2025-05-17 00:26:15.057 [INFO][5496] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="724faaf140462e2a3aedee0e2f4948df670f13129a153bb7dcc162323bac02ba" HandleID="k8s-pod-network.724faaf140462e2a3aedee0e2f4948df670f13129a153bb7dcc162323bac02ba" Workload="ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--9q2hm-eth0" May 17 00:26:15.444438 containerd[2119]: 2025-05-17 00:26:15.059 [INFO][5496] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="724faaf140462e2a3aedee0e2f4948df670f13129a153bb7dcc162323bac02ba" HandleID="k8s-pod-network.724faaf140462e2a3aedee0e2f4948df670f13129a153bb7dcc162323bac02ba" Workload="ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--9q2hm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f280), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-31-125", "pod":"calico-apiserver-58fb97568c-9q2hm", "timestamp":"2025-05-17 00:26:15.057836358 +0000 UTC"}, Hostname:"ip-172-31-31-125", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:26:15.444438 containerd[2119]: 2025-05-17 00:26:15.060 [INFO][5496] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:15.444438 containerd[2119]: 2025-05-17 00:26:15.195 [INFO][5496] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:26:15.444438 containerd[2119]: 2025-05-17 00:26:15.195 [INFO][5496] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-125' May 17 00:26:15.444438 containerd[2119]: 2025-05-17 00:26:15.223 [INFO][5496] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.724faaf140462e2a3aedee0e2f4948df670f13129a153bb7dcc162323bac02ba" host="ip-172-31-31-125" May 17 00:26:15.444438 containerd[2119]: 2025-05-17 00:26:15.244 [INFO][5496] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-125" May 17 00:26:15.444438 containerd[2119]: 2025-05-17 00:26:15.257 [INFO][5496] ipam/ipam.go 511: Trying affinity for 192.168.75.128/26 host="ip-172-31-31-125" May 17 00:26:15.444438 containerd[2119]: 2025-05-17 00:26:15.265 [INFO][5496] ipam/ipam.go 158: Attempting to load block cidr=192.168.75.128/26 host="ip-172-31-31-125" May 17 00:26:15.444438 containerd[2119]: 2025-05-17 00:26:15.276 [INFO][5496] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.75.128/26 host="ip-172-31-31-125" May 17 00:26:15.444438 containerd[2119]: 2025-05-17 00:26:15.276 [INFO][5496] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.75.128/26 handle="k8s-pod-network.724faaf140462e2a3aedee0e2f4948df670f13129a153bb7dcc162323bac02ba" host="ip-172-31-31-125" May 17 00:26:15.444438 containerd[2119]: 2025-05-17 00:26:15.279 [INFO][5496] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.724faaf140462e2a3aedee0e2f4948df670f13129a153bb7dcc162323bac02ba May 17 00:26:15.444438 containerd[2119]: 2025-05-17 00:26:15.291 [INFO][5496] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.75.128/26 handle="k8s-pod-network.724faaf140462e2a3aedee0e2f4948df670f13129a153bb7dcc162323bac02ba" host="ip-172-31-31-125" May 17 00:26:15.444438 containerd[2119]: 2025-05-17 00:26:15.325 [INFO][5496] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.75.134/26] block=192.168.75.128/26 handle="k8s-pod-network.724faaf140462e2a3aedee0e2f4948df670f13129a153bb7dcc162323bac02ba" host="ip-172-31-31-125" May 17 00:26:15.444438 containerd[2119]: 2025-05-17 00:26:15.325 [INFO][5496] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.75.134/26] handle="k8s-pod-network.724faaf140462e2a3aedee0e2f4948df670f13129a153bb7dcc162323bac02ba" host="ip-172-31-31-125" May 17 00:26:15.444438 containerd[2119]: 2025-05-17 00:26:15.325 [INFO][5496] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
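[Editor's note] The ipam entries above walk Calico's allocation loop end to end: acquire the host-wide IPAM lock, look up the node's block affinities, confirm the affinity for 192.168.75.128/26, load that block, claim a free ordinal for the handle, write the block back, and release the lock. A minimal Go sketch of the claim step, assuming a single in-memory block (the real calico/ipam package persists blocks in the datastore and retries on write conflicts; this is an illustration, not a reimplementation):

package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// block models one /26 allocation block with per-ordinal handle ownership.
type block struct {
	cidr    netip.Prefix   // e.g. 192.168.75.128/26
	handles map[int]string // ordinal -> handle that claimed it
}

var hostWideIPAMLock sync.Mutex // stands in for the "host-wide IPAM lock" in the log

// autoAssign claims the first free address in the block for a handle,
// mirroring the lock -> load -> claim -> write-back sequence above.
func autoAssign(b *block, handle string) (netip.Addr, error) {
	hostWideIPAMLock.Lock()
	defer hostWideIPAMLock.Unlock() // "Released host-wide IPAM lock."

	addr := b.cidr.Addr()
	size := 1 << (32 - b.cidr.Bits()) // 64 addresses in a /26
	for ord := 0; ord < size; ord++ {
		if _, taken := b.handles[ord]; !taken {
			b.handles[ord] = handle // "Writing block in order to claim IPs"
			return addr, nil
		}
		addr = addr.Next()
	}
	return netip.Addr{}, fmt.Errorf("block %s is full", b.cidr)
}

func main() {
	b := &block{cidr: netip.MustParsePrefix("192.168.75.128/26"), handles: map[int]string{}}
	// Pretend ordinals 0-5 (.128-.133) were claimed earlier; the next claim
	// yields .134, matching the apiserver pod's assignment in the log.
	for ord := 0; ord <= 5; ord++ {
		b.handles[ord] = "earlier-handle"
	}
	ip, _ := autoAssign(b, "k8s-pod-network.724faaf1") // handle truncated for readability
	fmt.Println(ip)                                    // 192.168.75.134
}

The write-back before returning is what makes the claim durable: two racing allocators would conflict on the block update in the datastore rather than hand out the same address.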
May 17 00:26:15.444438 containerd[2119]: 2025-05-17 00:26:15.325 [INFO][5496] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.134/26] IPv6=[] ContainerID="724faaf140462e2a3aedee0e2f4948df670f13129a153bb7dcc162323bac02ba" HandleID="k8s-pod-network.724faaf140462e2a3aedee0e2f4948df670f13129a153bb7dcc162323bac02ba" Workload="ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--9q2hm-eth0" May 17 00:26:15.447795 containerd[2119]: 2025-05-17 00:26:15.343 [INFO][5453] cni-plugin/k8s.go 418: Populated endpoint ContainerID="724faaf140462e2a3aedee0e2f4948df670f13129a153bb7dcc162323bac02ba" Namespace="calico-apiserver" Pod="calico-apiserver-58fb97568c-9q2hm" WorkloadEndpoint="ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--9q2hm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--9q2hm-eth0", GenerateName:"calico-apiserver-58fb97568c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e097cbd3-9914-4403-a492-af7b73e56564", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 25, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58fb97568c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-125", ContainerID:"", Pod:"calico-apiserver-58fb97568c-9q2hm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7087658358d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:15.447795 containerd[2119]: 2025-05-17 00:26:15.351 [INFO][5453] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.134/32] ContainerID="724faaf140462e2a3aedee0e2f4948df670f13129a153bb7dcc162323bac02ba" Namespace="calico-apiserver" Pod="calico-apiserver-58fb97568c-9q2hm" WorkloadEndpoint="ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--9q2hm-eth0" May 17 00:26:15.447795 containerd[2119]: 2025-05-17 00:26:15.352 [INFO][5453] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7087658358d ContainerID="724faaf140462e2a3aedee0e2f4948df670f13129a153bb7dcc162323bac02ba" Namespace="calico-apiserver" Pod="calico-apiserver-58fb97568c-9q2hm" WorkloadEndpoint="ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--9q2hm-eth0" May 17 00:26:15.447795 containerd[2119]: 2025-05-17 00:26:15.362 [INFO][5453] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="724faaf140462e2a3aedee0e2f4948df670f13129a153bb7dcc162323bac02ba" Namespace="calico-apiserver" Pod="calico-apiserver-58fb97568c-9q2hm" WorkloadEndpoint="ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--9q2hm-eth0" May 17 00:26:15.447795 containerd[2119]: 2025-05-17 00:26:15.374 [INFO][5453] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="724faaf140462e2a3aedee0e2f4948df670f13129a153bb7dcc162323bac02ba" Namespace="calico-apiserver" Pod="calico-apiserver-58fb97568c-9q2hm" WorkloadEndpoint="ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--9q2hm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--9q2hm-eth0", GenerateName:"calico-apiserver-58fb97568c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e097cbd3-9914-4403-a492-af7b73e56564", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 25, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58fb97568c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-125", ContainerID:"724faaf140462e2a3aedee0e2f4948df670f13129a153bb7dcc162323bac02ba", Pod:"calico-apiserver-58fb97568c-9q2hm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7087658358d", MAC:"ee:bd:85:51:0d:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:15.447795 containerd[2119]: 2025-05-17 00:26:15.421 [INFO][5453] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="724faaf140462e2a3aedee0e2f4948df670f13129a153bb7dcc162323bac02ba" Namespace="calico-apiserver" Pod="calico-apiserver-58fb97568c-9q2hm" WorkloadEndpoint="ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--9q2hm-eth0" May 17 00:26:15.494602 containerd[2119]: time="2025-05-17T00:26:15.489848587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-m5q6w,Uid:c4fe4f39-7918-4903-9c97-2e02a23b49cc,Namespace:kube-system,Attempt:1,} returns sandbox id \"978c9f04bcf870571067cac046335fa2fbb7ce4c26b212cbcbbda6860cf35e4c\"" May 17 00:26:15.500443 containerd[2119]: time="2025-05-17T00:26:15.496737849Z" level=info msg="CreateContainer within sandbox \"978c9f04bcf870571067cac046335fa2fbb7ce4c26b212cbcbbda6860cf35e4c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:26:15.534422 containerd[2119]: time="2025-05-17T00:26:15.532102757Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:26:15.534422 containerd[2119]: time="2025-05-17T00:26:15.533085234Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:26:15.534422 containerd[2119]: time="2025-05-17T00:26:15.533105069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:26:15.534422 containerd[2119]: time="2025-05-17T00:26:15.533209102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:26:15.545550 containerd[2119]: time="2025-05-17T00:26:15.545501271Z" level=info msg="CreateContainer within sandbox \"978c9f04bcf870571067cac046335fa2fbb7ce4c26b212cbcbbda6860cf35e4c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2fdd18e88ac036d773a41efd3bfc0f471505d01733849b914e80b13ef5954574\"" May 17 00:26:15.548115 containerd[2119]: time="2025-05-17T00:26:15.546515152Z" level=info msg="StartContainer for \"2fdd18e88ac036d773a41efd3bfc0f471505d01733849b914e80b13ef5954574\"" May 17 00:26:15.630685 containerd[2119]: time="2025-05-17T00:26:15.630175374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-fm6b4,Uid:f87fae28-48af-42f8-92bd-1ecd569fff56,Namespace:calico-system,Attempt:1,} returns sandbox id \"18c7848d074b2a78a2f32ed0ed90617ac7dab9ce5625d467f146be5111b7b3ee\"" May 17 00:26:15.672371 containerd[2119]: time="2025-05-17T00:26:15.669967695Z" level=info msg="StartContainer for \"2fdd18e88ac036d773a41efd3bfc0f471505d01733849b914e80b13ef5954574\" returns successfully" May 17 00:26:15.672371 containerd[2119]: time="2025-05-17T00:26:15.670201187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58fb97568c-9q2hm,Uid:e097cbd3-9914-4403-a492-af7b73e56564,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"724faaf140462e2a3aedee0e2f4948df670f13129a153bb7dcc162323bac02ba\"" May 17 00:26:15.782071 kubelet[3368]: I0517 00:26:15.782020 3368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-m5q6w" podStartSLOduration=45.782002142 podStartE2EDuration="45.782002142s" podCreationTimestamp="2025-05-17 00:25:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:26:15.777055953 +0000 UTC m=+50.758782690" watchObservedRunningTime="2025-05-17 00:26:15.782002142 +0000 UTC m=+50.763728880" May 17 00:26:15.897423 systemd[1]: Started sshd@10-172.31.31.125:22-147.75.109.163:43156.service - OpenSSH per-connection server daemon (147.75.109.163:43156). May 17 00:26:16.094878 sshd[5750]: Accepted publickey for core from 147.75.109.163 port 43156 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:26:16.098804 sshd[5750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:26:16.105187 systemd-logind[2074]: New session 11 of user core. May 17 00:26:16.109390 systemd[1]: Started session-11.scope - Session 11 of User core. May 17 00:26:16.181782 containerd[2119]: time="2025-05-17T00:26:16.179742521Z" level=info msg="StopPodSandbox for \"4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8\"" May 17 00:26:16.374873 containerd[2119]: 2025-05-17 00:26:16.311 [INFO][5770] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8" May 17 00:26:16.374873 containerd[2119]: 2025-05-17 00:26:16.312 [INFO][5770] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8" iface="eth0" netns="/var/run/netns/cni-a8be08ef-636e-7da1-e881-535e908fd76c" May 17 00:26:16.374873 containerd[2119]: 2025-05-17 00:26:16.312 [INFO][5770] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8" iface="eth0" netns="/var/run/netns/cni-a8be08ef-636e-7da1-e881-535e908fd76c" May 17 00:26:16.374873 containerd[2119]: 2025-05-17 00:26:16.313 [INFO][5770] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8" iface="eth0" netns="/var/run/netns/cni-a8be08ef-636e-7da1-e881-535e908fd76c" May 17 00:26:16.374873 containerd[2119]: 2025-05-17 00:26:16.313 [INFO][5770] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8" May 17 00:26:16.374873 containerd[2119]: 2025-05-17 00:26:16.313 [INFO][5770] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8" May 17 00:26:16.374873 containerd[2119]: 2025-05-17 00:26:16.350 [INFO][5784] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8" HandleID="k8s-pod-network.4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8" Workload="ip--172--31--31--125-k8s-calico--kube--controllers--db6d855c8--9lxqb-eth0" May 17 00:26:16.374873 containerd[2119]: 2025-05-17 00:26:16.350 [INFO][5784] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:16.374873 containerd[2119]: 2025-05-17 00:26:16.350 [INFO][5784] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:26:16.374873 containerd[2119]: 2025-05-17 00:26:16.362 [WARNING][5784] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8" HandleID="k8s-pod-network.4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8" Workload="ip--172--31--31--125-k8s-calico--kube--controllers--db6d855c8--9lxqb-eth0" May 17 00:26:16.374873 containerd[2119]: 2025-05-17 00:26:16.362 [INFO][5784] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8" HandleID="k8s-pod-network.4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8" Workload="ip--172--31--31--125-k8s-calico--kube--controllers--db6d855c8--9lxqb-eth0" May 17 00:26:16.374873 containerd[2119]: 2025-05-17 00:26:16.369 [INFO][5784] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:16.374873 containerd[2119]: 2025-05-17 00:26:16.372 [INFO][5770] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8" May 17 00:26:16.376673 containerd[2119]: time="2025-05-17T00:26:16.376055325Z" level=info msg="TearDown network for sandbox \"4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8\" successfully" May 17 00:26:16.376673 containerd[2119]: time="2025-05-17T00:26:16.376099545Z" level=info msg="StopPodSandbox for \"4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8\" returns successfully" May 17 00:26:16.380310 containerd[2119]: time="2025-05-17T00:26:16.379813826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-db6d855c8-9lxqb,Uid:739e36e0-8a50-4381-84dc-d3473d61c58e,Namespace:calico-system,Attempt:1,}" May 17 00:26:16.380519 systemd[1]: run-netns-cni\x2da8be08ef\x2d636e\x2d7da1\x2de881\x2d535e908fd76c.mount: Deactivated successfully. May 17 00:26:16.382082 systemd-networkd[1648]: calia92f5acd775: Gained IPv6LL May 17 00:26:16.444814 systemd-networkd[1648]: cali5776d90a4bd: Gained IPv6LL May 17 00:26:16.508854 systemd-networkd[1648]: cali7087658358d: Gained IPv6LL May 17 00:26:16.693770 systemd-networkd[1648]: calibabe576789d: Link UP May 17 00:26:16.702896 systemd-networkd[1648]: calibabe576789d: Gained carrier May 17 00:26:16.758223 containerd[2119]: 2025-05-17 00:26:16.468 [INFO][5791] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--125-k8s-calico--kube--controllers--db6d855c8--9lxqb-eth0 calico-kube-controllers-db6d855c8- calico-system 739e36e0-8a50-4381-84dc-d3473d61c58e 1020 0 2025-05-17 00:25:46 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:db6d855c8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-31-125 calico-kube-controllers-db6d855c8-9lxqb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calibabe576789d [] [] }} ContainerID="390efb28305abdc78cfbe48cda526fe9c32c3d4e018853266021644d603fcb5c" Namespace="calico-system" Pod="calico-kube-controllers-db6d855c8-9lxqb" WorkloadEndpoint="ip--172--31--31--125-k8s-calico--kube--controllers--db6d855c8--9lxqb-" May 17 00:26:16.758223 containerd[2119]: 2025-05-17 00:26:16.468 [INFO][5791] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="390efb28305abdc78cfbe48cda526fe9c32c3d4e018853266021644d603fcb5c" Namespace="calico-system" Pod="calico-kube-controllers-db6d855c8-9lxqb" WorkloadEndpoint="ip--172--31--31--125-k8s-calico--kube--controllers--db6d855c8--9lxqb-eth0" May 17 00:26:16.758223 containerd[2119]: 2025-05-17 00:26:16.555 [INFO][5805] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="390efb28305abdc78cfbe48cda526fe9c32c3d4e018853266021644d603fcb5c" HandleID="k8s-pod-network.390efb28305abdc78cfbe48cda526fe9c32c3d4e018853266021644d603fcb5c" Workload="ip--172--31--31--125-k8s-calico--kube--controllers--db6d855c8--9lxqb-eth0" May 17 00:26:16.758223 containerd[2119]: 2025-05-17 00:26:16.555 [INFO][5805] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="390efb28305abdc78cfbe48cda526fe9c32c3d4e018853266021644d603fcb5c" HandleID="k8s-pod-network.390efb28305abdc78cfbe48cda526fe9c32c3d4e018853266021644d603fcb5c" Workload="ip--172--31--31--125-k8s-calico--kube--controllers--db6d855c8--9lxqb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc000234fc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-125", "pod":"calico-kube-controllers-db6d855c8-9lxqb", "timestamp":"2025-05-17 00:26:16.555233104 +0000 UTC"}, Hostname:"ip-172-31-31-125", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:26:16.758223 containerd[2119]: 2025-05-17 00:26:16.555 [INFO][5805] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:16.758223 containerd[2119]: 2025-05-17 00:26:16.555 [INFO][5805] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:26:16.758223 containerd[2119]: 2025-05-17 00:26:16.556 [INFO][5805] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-125' May 17 00:26:16.758223 containerd[2119]: 2025-05-17 00:26:16.565 [INFO][5805] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.390efb28305abdc78cfbe48cda526fe9c32c3d4e018853266021644d603fcb5c" host="ip-172-31-31-125" May 17 00:26:16.758223 containerd[2119]: 2025-05-17 00:26:16.574 [INFO][5805] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-125" May 17 00:26:16.758223 containerd[2119]: 2025-05-17 00:26:16.596 [INFO][5805] ipam/ipam.go 511: Trying affinity for 192.168.75.128/26 host="ip-172-31-31-125" May 17 00:26:16.758223 containerd[2119]: 2025-05-17 00:26:16.608 [INFO][5805] ipam/ipam.go 158: Attempting to load block cidr=192.168.75.128/26 host="ip-172-31-31-125" May 17 00:26:16.758223 containerd[2119]: 2025-05-17 00:26:16.616 [INFO][5805] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.75.128/26 host="ip-172-31-31-125" May 17 00:26:16.758223 containerd[2119]: 2025-05-17 00:26:16.617 [INFO][5805] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.75.128/26 handle="k8s-pod-network.390efb28305abdc78cfbe48cda526fe9c32c3d4e018853266021644d603fcb5c" host="ip-172-31-31-125" May 17 00:26:16.758223 containerd[2119]: 2025-05-17 00:26:16.626 [INFO][5805] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.390efb28305abdc78cfbe48cda526fe9c32c3d4e018853266021644d603fcb5c May 17 00:26:16.758223 containerd[2119]: 2025-05-17 00:26:16.637 [INFO][5805] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.75.128/26 handle="k8s-pod-network.390efb28305abdc78cfbe48cda526fe9c32c3d4e018853266021644d603fcb5c" host="ip-172-31-31-125" May 17 00:26:16.758223 containerd[2119]: 2025-05-17 00:26:16.666 [INFO][5805] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.75.135/26] block=192.168.75.128/26 handle="k8s-pod-network.390efb28305abdc78cfbe48cda526fe9c32c3d4e018853266021644d603fcb5c" host="ip-172-31-31-125" May 17 00:26:16.758223 containerd[2119]: 2025-05-17 00:26:16.666 [INFO][5805] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.75.135/26] handle="k8s-pod-network.390efb28305abdc78cfbe48cda526fe9c32c3d4e018853266021644d603fcb5c" host="ip-172-31-31-125" May 17 00:26:16.758223 containerd[2119]: 2025-05-17 00:26:16.666 [INFO][5805] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
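[Editor's note] Every pod IP on this node so far (.133 for goldmane, .134 for the first apiserver replica, now .135 for kube-controllers) is carved out of the same affine block, 192.168.75.128/26; /26 is Calico's default block size, giving 64 ordinals per node block, and each per-pod route in the log is a /32 inside it. A quick containment check with nothing but the standard library:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.75.128/26")
	for _, s := range []string{"192.168.75.133", "192.168.75.134", "192.168.75.135"} {
		ip := netip.MustParseAddr(s)
		// Every per-pod IPNetworks entry in the log is a /32 from this block.
		fmt.Printf("%s in %s: %v\n", ip, block, block.Contains(ip))
	}
	// The block spans 192.168.75.128-191, so 64 ordinals (0-63).
	fmt.Println("block size:", 1<<(32-block.Bits()))
}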
May 17 00:26:16.758223 containerd[2119]: 2025-05-17 00:26:16.666 [INFO][5805] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.135/26] IPv6=[] ContainerID="390efb28305abdc78cfbe48cda526fe9c32c3d4e018853266021644d603fcb5c" HandleID="k8s-pod-network.390efb28305abdc78cfbe48cda526fe9c32c3d4e018853266021644d603fcb5c" Workload="ip--172--31--31--125-k8s-calico--kube--controllers--db6d855c8--9lxqb-eth0" May 17 00:26:16.764222 containerd[2119]: 2025-05-17 00:26:16.675 [INFO][5791] cni-plugin/k8s.go 418: Populated endpoint ContainerID="390efb28305abdc78cfbe48cda526fe9c32c3d4e018853266021644d603fcb5c" Namespace="calico-system" Pod="calico-kube-controllers-db6d855c8-9lxqb" WorkloadEndpoint="ip--172--31--31--125-k8s-calico--kube--controllers--db6d855c8--9lxqb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--125-k8s-calico--kube--controllers--db6d855c8--9lxqb-eth0", GenerateName:"calico-kube-controllers-db6d855c8-", Namespace:"calico-system", SelfLink:"", UID:"739e36e0-8a50-4381-84dc-d3473d61c58e", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 25, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"db6d855c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-125", ContainerID:"", Pod:"calico-kube-controllers-db6d855c8-9lxqb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibabe576789d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:16.764222 containerd[2119]: 2025-05-17 00:26:16.675 [INFO][5791] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.135/32] ContainerID="390efb28305abdc78cfbe48cda526fe9c32c3d4e018853266021644d603fcb5c" Namespace="calico-system" Pod="calico-kube-controllers-db6d855c8-9lxqb" WorkloadEndpoint="ip--172--31--31--125-k8s-calico--kube--controllers--db6d855c8--9lxqb-eth0" May 17 00:26:16.764222 containerd[2119]: 2025-05-17 00:26:16.675 [INFO][5791] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibabe576789d ContainerID="390efb28305abdc78cfbe48cda526fe9c32c3d4e018853266021644d603fcb5c" Namespace="calico-system" Pod="calico-kube-controllers-db6d855c8-9lxqb" WorkloadEndpoint="ip--172--31--31--125-k8s-calico--kube--controllers--db6d855c8--9lxqb-eth0" May 17 00:26:16.764222 containerd[2119]: 2025-05-17 00:26:16.693 [INFO][5791] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="390efb28305abdc78cfbe48cda526fe9c32c3d4e018853266021644d603fcb5c" Namespace="calico-system" Pod="calico-kube-controllers-db6d855c8-9lxqb" WorkloadEndpoint="ip--172--31--31--125-k8s-calico--kube--controllers--db6d855c8--9lxqb-eth0" May 17 00:26:16.764222 containerd[2119]: 2025-05-17 
00:26:16.694 [INFO][5791] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="390efb28305abdc78cfbe48cda526fe9c32c3d4e018853266021644d603fcb5c" Namespace="calico-system" Pod="calico-kube-controllers-db6d855c8-9lxqb" WorkloadEndpoint="ip--172--31--31--125-k8s-calico--kube--controllers--db6d855c8--9lxqb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--125-k8s-calico--kube--controllers--db6d855c8--9lxqb-eth0", GenerateName:"calico-kube-controllers-db6d855c8-", Namespace:"calico-system", SelfLink:"", UID:"739e36e0-8a50-4381-84dc-d3473d61c58e", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 25, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"db6d855c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-125", ContainerID:"390efb28305abdc78cfbe48cda526fe9c32c3d4e018853266021644d603fcb5c", Pod:"calico-kube-controllers-db6d855c8-9lxqb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibabe576789d", MAC:"3e:7c:7e:0e:e5:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:16.764222 containerd[2119]: 2025-05-17 00:26:16.741 [INFO][5791] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="390efb28305abdc78cfbe48cda526fe9c32c3d4e018853266021644d603fcb5c" Namespace="calico-system" Pod="calico-kube-controllers-db6d855c8-9lxqb" WorkloadEndpoint="ip--172--31--31--125-k8s-calico--kube--controllers--db6d855c8--9lxqb-eth0" May 17 00:26:16.769395 systemd-networkd[1648]: calib7def621fb7: Gained IPv6LL May 17 00:26:16.837417 sshd[5750]: pam_unix(sshd:session): session closed for user core May 17 00:26:16.847789 systemd[1]: sshd@10-172.31.31.125:22-147.75.109.163:43156.service: Deactivated successfully. May 17 00:26:16.860244 systemd[1]: session-11.scope: Deactivated successfully. May 17 00:26:16.861663 systemd-logind[2074]: Session 11 logged out. Waiting for processes to exit. May 17 00:26:16.865973 systemd-logind[2074]: Removed session 11. May 17 00:26:16.889680 containerd[2119]: time="2025-05-17T00:26:16.888737110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:26:16.889680 containerd[2119]: time="2025-05-17T00:26:16.888811166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:26:16.889680 containerd[2119]: time="2025-05-17T00:26:16.888835647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:26:16.889680 containerd[2119]: time="2025-05-17T00:26:16.888949088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:26:17.056141 containerd[2119]: time="2025-05-17T00:26:17.056099336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-db6d855c8-9lxqb,Uid:739e36e0-8a50-4381-84dc-d3473d61c58e,Namespace:calico-system,Attempt:1,} returns sandbox id \"390efb28305abdc78cfbe48cda526fe9c32c3d4e018853266021644d603fcb5c\"" May 17 00:26:17.129435 containerd[2119]: time="2025-05-17T00:26:17.129383687Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:26:17.130276 containerd[2119]: time="2025-05-17T00:26:17.130222021Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 17 00:26:17.131253 containerd[2119]: time="2025-05-17T00:26:17.131186268Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:26:17.133663 containerd[2119]: time="2025-05-17T00:26:17.133599839Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:26:17.134423 containerd[2119]: time="2025-05-17T00:26:17.134238311Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 1.735288717s" May 17 00:26:17.134423 containerd[2119]: time="2025-05-17T00:26:17.134275429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 17 00:26:17.138812 containerd[2119]: time="2025-05-17T00:26:17.138660914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:26:17.142446 containerd[2119]: time="2025-05-17T00:26:17.141536287Z" level=info msg="CreateContainer within sandbox \"42f2733cfbf7242e0f8ca3df84c800fffdc5c49e1e2364b7744a2982048caa9b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 17 00:26:17.171308 containerd[2119]: time="2025-05-17T00:26:17.171263997Z" level=info msg="CreateContainer within sandbox \"42f2733cfbf7242e0f8ca3df84c800fffdc5c49e1e2364b7744a2982048caa9b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2804af81d16c6c916a34829728d6573b8a683be26cc323e3b12d940741a3da40\"" May 17 00:26:17.173902 containerd[2119]: time="2025-05-17T00:26:17.172068620Z" level=info msg="StartContainer for \"2804af81d16c6c916a34829728d6573b8a683be26cc323e3b12d940741a3da40\"" May 17 00:26:17.183748 containerd[2119]: time="2025-05-17T00:26:17.182694552Z" level=info msg="StopPodSandbox for \"4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539\"" May 17 00:26:17.340674 containerd[2119]: time="2025-05-17T00:26:17.339154539Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:26:17.351615 containerd[2119]: time="2025-05-17T00:26:17.349811575Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:26:17.351615 containerd[2119]: time="2025-05-17T00:26:17.349994852Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:26:17.351786 kubelet[3368]: E0517 00:26:17.350223 3368 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:26:17.351786 kubelet[3368]: E0517 00:26:17.350277 3368 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:26:17.353267 kubelet[3368]: E0517 00:26:17.352837 3368 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d9ntn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-fm6b4_calico-system(f87fae28-48af-42f8-92bd-1ecd569fff56): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:26:17.354427 kubelet[3368]: E0517 00:26:17.354062 3368 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-fm6b4" podUID="f87fae28-48af-42f8-92bd-1ecd569fff56" May 17 00:26:17.356719 containerd[2119]: time="2025-05-17T00:26:17.356683761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:26:17.382118 containerd[2119]: time="2025-05-17T00:26:17.379008390Z" level=info msg="StartContainer for \"2804af81d16c6c916a34829728d6573b8a683be26cc323e3b12d940741a3da40\" returns successfully" May 17 00:26:17.411628 containerd[2119]: 2025-05-17 00:26:17.315 [INFO][5892] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539" May 17 00:26:17.411628 containerd[2119]: 2025-05-17 00:26:17.315 [INFO][5892] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539" iface="eth0" netns="/var/run/netns/cni-1e289866-f282-f293-5af3-0ce34c9e8118" May 17 00:26:17.411628 containerd[2119]: 2025-05-17 00:26:17.315 [INFO][5892] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539" iface="eth0" netns="/var/run/netns/cni-1e289866-f282-f293-5af3-0ce34c9e8118" May 17 00:26:17.411628 containerd[2119]: 2025-05-17 00:26:17.318 [INFO][5892] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539" iface="eth0" netns="/var/run/netns/cni-1e289866-f282-f293-5af3-0ce34c9e8118" May 17 00:26:17.411628 containerd[2119]: 2025-05-17 00:26:17.318 [INFO][5892] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539" May 17 00:26:17.411628 containerd[2119]: 2025-05-17 00:26:17.318 [INFO][5892] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539" May 17 00:26:17.411628 containerd[2119]: 2025-05-17 00:26:17.399 [INFO][5918] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539" HandleID="k8s-pod-network.4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539" Workload="ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--2dtmf-eth0" May 17 00:26:17.411628 containerd[2119]: 2025-05-17 00:26:17.400 [INFO][5918] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:17.411628 containerd[2119]: 2025-05-17 00:26:17.400 [INFO][5918] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:26:17.411628 containerd[2119]: 2025-05-17 00:26:17.406 [WARNING][5918] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539" HandleID="k8s-pod-network.4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539" Workload="ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--2dtmf-eth0" May 17 00:26:17.411628 containerd[2119]: 2025-05-17 00:26:17.406 [INFO][5918] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539" HandleID="k8s-pod-network.4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539" Workload="ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--2dtmf-eth0" May 17 00:26:17.411628 containerd[2119]: 2025-05-17 00:26:17.407 [INFO][5918] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:17.411628 containerd[2119]: 2025-05-17 00:26:17.409 [INFO][5892] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539" May 17 00:26:17.412641 containerd[2119]: time="2025-05-17T00:26:17.411825845Z" level=info msg="TearDown network for sandbox \"4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539\" successfully" May 17 00:26:17.412641 containerd[2119]: time="2025-05-17T00:26:17.411849765Z" level=info msg="StopPodSandbox for \"4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539\" returns successfully" May 17 00:26:17.413641 containerd[2119]: time="2025-05-17T00:26:17.412705439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58fb97568c-2dtmf,Uid:c7e5c708-6f1e-4a6a-8224-1c84baaaea1e,Namespace:calico-apiserver,Attempt:1,}" May 17 00:26:17.416371 systemd[1]: run-netns-cni\x2d1e289866\x2df282\x2df293\x2d5af3\x2d0ce34c9e8118.mount: Deactivated successfully. May 17 00:26:17.581911 systemd-networkd[1648]: cali803bd4c7225: Link UP May 17 00:26:17.583251 systemd-networkd[1648]: cali803bd4c7225: Gained carrier May 17 00:26:17.602887 systemd-journald[1569]: Under memory pressure, flushing caches. May 17 00:26:17.597566 systemd-resolved[1975]: Under memory pressure, flushing caches. 
May 17 00:26:17.597630 systemd-resolved[1975]: Flushed all caches. May 17 00:26:17.619699 containerd[2119]: 2025-05-17 00:26:17.497 [INFO][5935] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--2dtmf-eth0 calico-apiserver-58fb97568c- calico-apiserver c7e5c708-6f1e-4a6a-8224-1c84baaaea1e 1033 0 2025-05-17 00:25:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:58fb97568c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-31-125 calico-apiserver-58fb97568c-2dtmf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali803bd4c7225 [] [] }} ContainerID="855b60e6842c1b6285c10415b5ebc85df12ea8a64b2d7733b4ec5f1ae8f60a85" Namespace="calico-apiserver" Pod="calico-apiserver-58fb97568c-2dtmf" WorkloadEndpoint="ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--2dtmf-" May 17 00:26:17.619699 containerd[2119]: 2025-05-17 00:26:17.497 [INFO][5935] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="855b60e6842c1b6285c10415b5ebc85df12ea8a64b2d7733b4ec5f1ae8f60a85" Namespace="calico-apiserver" Pod="calico-apiserver-58fb97568c-2dtmf" WorkloadEndpoint="ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--2dtmf-eth0" May 17 00:26:17.619699 containerd[2119]: 2025-05-17 00:26:17.529 [INFO][5946] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="855b60e6842c1b6285c10415b5ebc85df12ea8a64b2d7733b4ec5f1ae8f60a85" HandleID="k8s-pod-network.855b60e6842c1b6285c10415b5ebc85df12ea8a64b2d7733b4ec5f1ae8f60a85" Workload="ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--2dtmf-eth0" May 17 00:26:17.619699 containerd[2119]: 2025-05-17 00:26:17.530 [INFO][5946] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="855b60e6842c1b6285c10415b5ebc85df12ea8a64b2d7733b4ec5f1ae8f60a85" HandleID="k8s-pod-network.855b60e6842c1b6285c10415b5ebc85df12ea8a64b2d7733b4ec5f1ae8f60a85" Workload="ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--2dtmf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9020), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-31-125", "pod":"calico-apiserver-58fb97568c-2dtmf", "timestamp":"2025-05-17 00:26:17.529820413 +0000 UTC"}, Hostname:"ip-172-31-31-125", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:26:17.619699 containerd[2119]: 2025-05-17 00:26:17.530 [INFO][5946] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:17.619699 containerd[2119]: 2025-05-17 00:26:17.530 [INFO][5946] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:26:17.619699 containerd[2119]: 2025-05-17 00:26:17.530 [INFO][5946] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-125' May 17 00:26:17.619699 containerd[2119]: 2025-05-17 00:26:17.537 [INFO][5946] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.855b60e6842c1b6285c10415b5ebc85df12ea8a64b2d7733b4ec5f1ae8f60a85" host="ip-172-31-31-125" May 17 00:26:17.619699 containerd[2119]: 2025-05-17 00:26:17.543 [INFO][5946] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-125" May 17 00:26:17.619699 containerd[2119]: 2025-05-17 00:26:17.548 [INFO][5946] ipam/ipam.go 511: Trying affinity for 192.168.75.128/26 host="ip-172-31-31-125" May 17 00:26:17.619699 containerd[2119]: 2025-05-17 00:26:17.550 [INFO][5946] ipam/ipam.go 158: Attempting to load block cidr=192.168.75.128/26 host="ip-172-31-31-125" May 17 00:26:17.619699 containerd[2119]: 2025-05-17 00:26:17.554 [INFO][5946] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.75.128/26 host="ip-172-31-31-125" May 17 00:26:17.619699 containerd[2119]: 2025-05-17 00:26:17.554 [INFO][5946] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.75.128/26 handle="k8s-pod-network.855b60e6842c1b6285c10415b5ebc85df12ea8a64b2d7733b4ec5f1ae8f60a85" host="ip-172-31-31-125" May 17 00:26:17.619699 containerd[2119]: 2025-05-17 00:26:17.556 [INFO][5946] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.855b60e6842c1b6285c10415b5ebc85df12ea8a64b2d7733b4ec5f1ae8f60a85 May 17 00:26:17.619699 containerd[2119]: 2025-05-17 00:26:17.564 [INFO][5946] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.75.128/26 handle="k8s-pod-network.855b60e6842c1b6285c10415b5ebc85df12ea8a64b2d7733b4ec5f1ae8f60a85" host="ip-172-31-31-125" May 17 00:26:17.619699 containerd[2119]: 2025-05-17 00:26:17.574 [INFO][5946] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.75.136/26] block=192.168.75.128/26 handle="k8s-pod-network.855b60e6842c1b6285c10415b5ebc85df12ea8a64b2d7733b4ec5f1ae8f60a85" host="ip-172-31-31-125" May 17 00:26:17.619699 containerd[2119]: 2025-05-17 00:26:17.574 [INFO][5946] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.75.136/26] handle="k8s-pod-network.855b60e6842c1b6285c10415b5ebc85df12ea8a64b2d7733b4ec5f1ae8f60a85" host="ip-172-31-31-125" May 17 00:26:17.619699 containerd[2119]: 2025-05-17 00:26:17.574 [INFO][5946] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
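[Editor's note] Each "Setting the host side veth name to cali..." entry picks a deterministic interface name: Calico derives the host-side veth from a hash of the workload identity, so the name is stable across CNI retries and stays within the kernel's 15-character IFNAMSIZ limit. A sketch of that scheme, assuming the commonly described form of a fixed prefix plus a truncated SHA-1 (the exact identity string Calico hashes is version-dependent, so treat this as illustrative):

package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// vethName mimics Calico's stable host-side interface naming:
// a fixed prefix plus a truncated SHA-1 of the workload identity.
func vethName(workloadID string) string {
	sum := sha1.Sum([]byte(workloadID))
	// 4-char prefix + 11 hex chars = 15 chars, the IFNAMSIZ ceiling.
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	fmt.Println(vethName("calico-apiserver.calico-apiserver-58fb97568c-2dtmf"))
}

Determinism matters here: if the CNI ADD is retried, the plugin can find and reuse the interface it created last time instead of leaking one interface per attempt.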
May 17 00:26:17.619699 containerd[2119]: 2025-05-17 00:26:17.574 [INFO][5946] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.136/26] IPv6=[] ContainerID="855b60e6842c1b6285c10415b5ebc85df12ea8a64b2d7733b4ec5f1ae8f60a85" HandleID="k8s-pod-network.855b60e6842c1b6285c10415b5ebc85df12ea8a64b2d7733b4ec5f1ae8f60a85" Workload="ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--2dtmf-eth0"
May 17 00:26:17.623261 containerd[2119]: 2025-05-17 00:26:17.576 [INFO][5935] cni-plugin/k8s.go 418: Populated endpoint ContainerID="855b60e6842c1b6285c10415b5ebc85df12ea8a64b2d7733b4ec5f1ae8f60a85" Namespace="calico-apiserver" Pod="calico-apiserver-58fb97568c-2dtmf" WorkloadEndpoint="ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--2dtmf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--2dtmf-eth0", GenerateName:"calico-apiserver-58fb97568c-", Namespace:"calico-apiserver", SelfLink:"", UID:"c7e5c708-6f1e-4a6a-8224-1c84baaaea1e", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 25, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58fb97568c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-125", ContainerID:"", Pod:"calico-apiserver-58fb97568c-2dtmf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali803bd4c7225", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
May 17 00:26:17.623261 containerd[2119]: 2025-05-17 00:26:17.577 [INFO][5935] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.136/32] ContainerID="855b60e6842c1b6285c10415b5ebc85df12ea8a64b2d7733b4ec5f1ae8f60a85" Namespace="calico-apiserver" Pod="calico-apiserver-58fb97568c-2dtmf" WorkloadEndpoint="ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--2dtmf-eth0"
May 17 00:26:17.623261 containerd[2119]: 2025-05-17 00:26:17.577 [INFO][5935] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali803bd4c7225 ContainerID="855b60e6842c1b6285c10415b5ebc85df12ea8a64b2d7733b4ec5f1ae8f60a85" Namespace="calico-apiserver" Pod="calico-apiserver-58fb97568c-2dtmf" WorkloadEndpoint="ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--2dtmf-eth0"
May 17 00:26:17.623261 containerd[2119]: 2025-05-17 00:26:17.584 [INFO][5935] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="855b60e6842c1b6285c10415b5ebc85df12ea8a64b2d7733b4ec5f1ae8f60a85" Namespace="calico-apiserver" Pod="calico-apiserver-58fb97568c-2dtmf" WorkloadEndpoint="ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--2dtmf-eth0"
May 17 00:26:17.623261 containerd[2119]: 2025-05-17 00:26:17.586 [INFO][5935] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="855b60e6842c1b6285c10415b5ebc85df12ea8a64b2d7733b4ec5f1ae8f60a85" Namespace="calico-apiserver" Pod="calico-apiserver-58fb97568c-2dtmf" WorkloadEndpoint="ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--2dtmf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--2dtmf-eth0", GenerateName:"calico-apiserver-58fb97568c-", Namespace:"calico-apiserver", SelfLink:"", UID:"c7e5c708-6f1e-4a6a-8224-1c84baaaea1e", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 25, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58fb97568c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-125", ContainerID:"855b60e6842c1b6285c10415b5ebc85df12ea8a64b2d7733b4ec5f1ae8f60a85", Pod:"calico-apiserver-58fb97568c-2dtmf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali803bd4c7225", MAC:"66:22:53:c9:48:ca", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
May 17 00:26:17.623261 containerd[2119]: 2025-05-17 00:26:17.606 [INFO][5935] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="855b60e6842c1b6285c10415b5ebc85df12ea8a64b2d7733b4ec5f1ae8f60a85" Namespace="calico-apiserver" Pod="calico-apiserver-58fb97568c-2dtmf" WorkloadEndpoint="ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--2dtmf-eth0"
May 17 00:26:17.654563 containerd[2119]: time="2025-05-17T00:26:17.654422746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:26:17.654563 containerd[2119]: time="2025-05-17T00:26:17.654482599Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:26:17.654563 containerd[2119]: time="2025-05-17T00:26:17.654498138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:26:17.655057 containerd[2119]: time="2025-05-17T00:26:17.654870026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:26:17.725273 containerd[2119]: time="2025-05-17T00:26:17.725232676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58fb97568c-2dtmf,Uid:c7e5c708-6f1e-4a6a-8224-1c84baaaea1e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"855b60e6842c1b6285c10415b5ebc85df12ea8a64b2d7733b4ec5f1ae8f60a85\""
May 17 00:26:17.822840 kubelet[3368]: E0517 00:26:17.822539 3368 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-fm6b4" podUID="f87fae28-48af-42f8-92bd-1ecd569fff56"
May 17 00:26:17.980832 systemd-networkd[1648]: calibabe576789d: Gained IPv6LL
May 17 00:26:18.877356 systemd-networkd[1648]: cali803bd4c7225: Gained IPv6LL
May 17 00:26:20.831013 containerd[2119]: time="2025-05-17T00:26:20.830962855Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:26:20.832149 containerd[2119]: time="2025-05-17T00:26:20.831924624Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431"
May 17 00:26:20.833518 containerd[2119]: time="2025-05-17T00:26:20.833283362Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:26:20.837478 containerd[2119]: time="2025-05-17T00:26:20.837440912Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:26:20.838332 containerd[2119]: time="2025-05-17T00:26:20.838302891Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 3.481575489s"
May 17 00:26:20.838416 containerd[2119]: time="2025-05-17T00:26:20.838334808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\""
May 17 00:26:20.844429 containerd[2119]: time="2025-05-17T00:26:20.844296633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\""
May 17 00:26:20.847062 containerd[2119]: time="2025-05-17T00:26:20.847010197Z" level=info msg="CreateContainer within sandbox \"724faaf140462e2a3aedee0e2f4948df670f13129a153bb7dcc162323bac02ba\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 17 00:26:20.872602 containerd[2119]: time="2025-05-17T00:26:20.872529769Z" level=info msg="CreateContainer within sandbox \"724faaf140462e2a3aedee0e2f4948df670f13129a153bb7dcc162323bac02ba\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a86f890881dafc8d99224be1b187145cb3cf37d69b472098008eba0e5128458e\""
May 17 00:26:20.873780 containerd[2119]: time="2025-05-17T00:26:20.873724597Z" level=info msg="StartContainer for \"a86f890881dafc8d99224be1b187145cb3cf37d69b472098008eba0e5128458e\""
May 17 00:26:20.969508 containerd[2119]: time="2025-05-17T00:26:20.969465368Z" level=info msg="StartContainer for \"a86f890881dafc8d99224be1b187145cb3cf37d69b472098008eba0e5128458e\" returns successfully"
May 17 00:26:21.295992 ntpd[2055]: Listen normally on 6 vxlan.calico 192.168.75.128:123
May 17 00:26:21.296076 ntpd[2055]: Listen normally on 7 cali6c00f04b9df [fe80::ecee:eeff:feee:eeee%4]:123
May 17 00:26:21.296667 ntpd[2055]: Listen normally on 8 vxlan.calico [fe80::64aa:13ff:fe98:2d3%5]:123
May 17 00:26:21.296704 ntpd[2055]: Listen normally on 9 cali80657e86d5c [fe80::ecee:eeff:feee:eeee%8]:123
May 17 00:26:21.296739 ntpd[2055]: Listen normally on 10 cali5776d90a4bd [fe80::ecee:eeff:feee:eeee%9]:123
May 17 00:26:21.296775 ntpd[2055]: Listen normally on 11 calia92f5acd775 [fe80::ecee:eeff:feee:eeee%10]:123
May 17 00:26:21.296818 ntpd[2055]: Listen normally on 12 calib7def621fb7 [fe80::ecee:eeff:feee:eeee%11]:123
May 17 00:26:21.296853 ntpd[2055]: Listen normally on 13 cali7087658358d [fe80::ecee:eeff:feee:eeee%12]:123
May 17 00:26:21.296891 ntpd[2055]: Listen normally on 14 calibabe576789d [fe80::ecee:eeff:feee:eeee%13]:123
May 17 00:26:21.296925 ntpd[2055]: Listen normally on 15 cali803bd4c7225 [fe80::ecee:eeff:feee:eeee%14]:123
May 17 00:26:21.868478 systemd[1]: run-containerd-runc-k8s.io-eee30c2c60c12bdac4be702c411f1aa3b775fe4826dc72af95040f2dcb129ff6-runc.JG7Fqj.mount: Deactivated successfully.
May 17 00:26:21.882017 systemd[1]: Started sshd@11-172.31.31.125:22-147.75.109.163:40040.service - OpenSSH per-connection server daemon (147.75.109.163:40040).
May 17 00:26:22.063956 kubelet[3368]: I0517 00:26:22.062643 3368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-58fb97568c-9q2hm" podStartSLOduration=35.890908868 podStartE2EDuration="41.062620505s" podCreationTimestamp="2025-05-17 00:25:41 +0000 UTC" firstStartedPulling="2025-05-17 00:26:15.672441716 +0000 UTC m=+50.654168442" lastFinishedPulling="2025-05-17 00:26:20.844153362 +0000 UTC m=+55.825880079" observedRunningTime="2025-05-17 00:26:21.851001018 +0000 UTC m=+56.832727755" watchObservedRunningTime="2025-05-17 00:26:22.062620505 +0000 UTC m=+57.044347240"
May 17 00:26:22.142575 sshd[6080]: Accepted publickey for core from 147.75.109.163 port 40040 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4
May 17 00:26:22.146460 sshd[6080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:26:22.158799 systemd-logind[2074]: New session 12 of user core.
May 17 00:26:22.163142 systemd[1]: Started session-12.scope - Session 12 of User core.
May 17 00:26:23.615817 systemd-journald[1569]: Under memory pressure, flushing caches.
May 17 00:26:23.614313 systemd-resolved[1975]: Under memory pressure, flushing caches.
May 17 00:26:23.614354 systemd-resolved[1975]: Flushed all caches.
May 17 00:26:23.643805 sshd[6080]: pam_unix(sshd:session): session closed for user core
May 17 00:26:23.654413 systemd[1]: sshd@11-172.31.31.125:22-147.75.109.163:40040.service: Deactivated successfully.
May 17 00:26:23.668899 systemd-logind[2074]: Session 12 logged out. Waiting for processes to exit.
May 17 00:26:23.685204 systemd[1]: Started sshd@12-172.31.31.125:22-147.75.109.163:40056.service - OpenSSH per-connection server daemon (147.75.109.163:40056).
May 17 00:26:23.685840 systemd[1]: session-12.scope: Deactivated successfully.
May 17 00:26:23.692816 systemd-logind[2074]: Removed session 12.
May 17 00:26:23.880066 sshd[6106]: Accepted publickey for core from 147.75.109.163 port 40056 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4
May 17 00:26:23.892898 sshd[6106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:26:23.904060 systemd-logind[2074]: New session 13 of user core.
May 17 00:26:23.911905 systemd[1]: Started session-13.scope - Session 13 of User core.
May 17 00:26:24.616814 sshd[6106]: pam_unix(sshd:session): session closed for user core
May 17 00:26:24.626155 systemd[1]: sshd@12-172.31.31.125:22-147.75.109.163:40056.service: Deactivated successfully.
May 17 00:26:24.645982 systemd-logind[2074]: Session 13 logged out. Waiting for processes to exit.
May 17 00:26:24.664183 systemd[1]: Started sshd@13-172.31.31.125:22-147.75.109.163:40066.service - OpenSSH per-connection server daemon (147.75.109.163:40066).
May 17 00:26:24.664687 systemd[1]: session-13.scope: Deactivated successfully.
May 17 00:26:24.675493 systemd-logind[2074]: Removed session 13.
May 17 00:26:24.868261 sshd[6118]: Accepted publickey for core from 147.75.109.163 port 40066 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4
May 17 00:26:24.871264 sshd[6118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:26:24.882242 systemd-logind[2074]: New session 14 of user core.
May 17 00:26:24.884946 systemd[1]: Started session-14.scope - Session 14 of User core.
May 17 00:26:25.043829 containerd[2119]: time="2025-05-17T00:26:25.043770975Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:26:25.079281 containerd[2119]: time="2025-05-17T00:26:25.079166430Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512"
May 17 00:26:25.095629 containerd[2119]: time="2025-05-17T00:26:25.095564373Z" level=info msg="ImageCreate event name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:26:25.098609 containerd[2119]: time="2025-05-17T00:26:25.097819717Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:26:25.103382 containerd[2119]: time="2025-05-17T00:26:25.103310351Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"52671183\" in 4.254997972s"
May 17 00:26:25.103382 containerd[2119]: time="2025-05-17T00:26:25.103370812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\""
May 17 00:26:25.151489 sshd[6118]: pam_unix(sshd:session): session closed for user core
May 17 00:26:25.162766 systemd[1]: sshd@13-172.31.31.125:22-147.75.109.163:40066.service: Deactivated successfully.
May 17 00:26:25.167477 systemd[1]: session-14.scope: Deactivated successfully.
May 17 00:26:25.168491 systemd-logind[2074]: Session 14 logged out. Waiting for processes to exit.
May 17 00:26:25.170219 systemd-logind[2074]: Removed session 14.
May 17 00:26:25.278414 containerd[2119]: time="2025-05-17T00:26:25.278379555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\""
May 17 00:26:25.664000 systemd-journald[1569]: Under memory pressure, flushing caches.
May 17 00:26:25.662631 systemd-resolved[1975]: Under memory pressure, flushing caches.
May 17 00:26:25.662652 systemd-resolved[1975]: Flushed all caches.
May 17 00:26:25.688630 containerd[2119]: time="2025-05-17T00:26:25.687934488Z" level=info msg="CreateContainer within sandbox \"390efb28305abdc78cfbe48cda526fe9c32c3d4e018853266021644d603fcb5c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
May 17 00:26:25.741598 containerd[2119]: time="2025-05-17T00:26:25.741546434Z" level=info msg="StopPodSandbox for \"b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74\""
May 17 00:26:25.927317 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount799725681.mount: Deactivated successfully.
May 17 00:26:26.030720 containerd[2119]: time="2025-05-17T00:26:26.030561520Z" level=info msg="CreateContainer within sandbox \"390efb28305abdc78cfbe48cda526fe9c32c3d4e018853266021644d603fcb5c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c304ba8f64f765df225e32b0586b4562fb6f21de1c534b0bf0d476040a8b6191\""
May 17 00:26:26.043122 containerd[2119]: time="2025-05-17T00:26:26.043082503Z" level=info msg="StartContainer for \"c304ba8f64f765df225e32b0586b4562fb6f21de1c534b0bf0d476040a8b6191\""
May 17 00:26:26.564259 containerd[2119]: time="2025-05-17T00:26:26.563324511Z" level=info msg="StartContainer for \"c304ba8f64f765df225e32b0586b4562fb6f21de1c534b0bf0d476040a8b6191\" returns successfully"
May 17 00:26:27.412210 containerd[2119]: 2025-05-17 00:26:26.585 [WARNING][6144] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--125-k8s-goldmane--8f77d7b6c--fm6b4-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"f87fae28-48af-42f8-92bd-1ecd569fff56", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 25, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-125", ContainerID:"18c7848d074b2a78a2f32ed0ed90617ac7dab9ce5625d467f146be5111b7b3ee", Pod:"goldmane-8f77d7b6c-fm6b4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.75.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib7def621fb7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
May 17 00:26:27.412210 containerd[2119]: 2025-05-17 00:26:26.595 [INFO][6144] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74"
May 17 00:26:27.412210 containerd[2119]: 2025-05-17 00:26:26.595 [INFO][6144] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74" iface="eth0" netns=""
May 17 00:26:27.412210 containerd[2119]: 2025-05-17 00:26:26.595 [INFO][6144] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74"
May 17 00:26:27.412210 containerd[2119]: 2025-05-17 00:26:26.595 [INFO][6144] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74"
May 17 00:26:27.412210 containerd[2119]: 2025-05-17 00:26:27.361 [INFO][6199] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74" HandleID="k8s-pod-network.b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74" Workload="ip--172--31--31--125-k8s-goldmane--8f77d7b6c--fm6b4-eth0"
May 17 00:26:27.412210 containerd[2119]: 2025-05-17 00:26:27.363 [INFO][6199] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 17 00:26:27.412210 containerd[2119]: 2025-05-17 00:26:27.365 [INFO][6199] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 17 00:26:27.412210 containerd[2119]: 2025-05-17 00:26:27.392 [WARNING][6199] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74" HandleID="k8s-pod-network.b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74" Workload="ip--172--31--31--125-k8s-goldmane--8f77d7b6c--fm6b4-eth0"
May 17 00:26:27.412210 containerd[2119]: 2025-05-17 00:26:27.392 [INFO][6199] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74" HandleID="k8s-pod-network.b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74" Workload="ip--172--31--31--125-k8s-goldmane--8f77d7b6c--fm6b4-eth0"
May 17 00:26:27.412210 containerd[2119]: 2025-05-17 00:26:27.395 [INFO][6199] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 17 00:26:27.412210 containerd[2119]: 2025-05-17 00:26:27.403 [INFO][6144] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74"
May 17 00:26:27.413969 containerd[2119]: time="2025-05-17T00:26:27.412572146Z" level=info msg="TearDown network for sandbox \"b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74\" successfully"
May 17 00:26:27.413969 containerd[2119]: time="2025-05-17T00:26:27.412789225Z" level=info msg="StopPodSandbox for \"b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74\" returns successfully"
May 17 00:26:27.714680 systemd-journald[1569]: Under memory pressure, flushing caches.
May 17 00:26:27.711498 systemd-resolved[1975]: Under memory pressure, flushing caches.
May 17 00:26:27.711550 systemd-resolved[1975]: Flushed all caches.
May 17 00:26:27.726955 containerd[2119]: time="2025-05-17T00:26:27.725752741Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:26:27.759229 containerd[2119]: time="2025-05-17T00:26:27.757608222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639"
May 17 00:26:27.759229 containerd[2119]: time="2025-05-17T00:26:27.758635726Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:26:27.772910 containerd[2119]: time="2025-05-17T00:26:27.772864694Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:26:27.775960 containerd[2119]: time="2025-05-17T00:26:27.775899344Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 2.497183186s"
May 17 00:26:27.777650 containerd[2119]: time="2025-05-17T00:26:27.776313431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\""
May 17 00:26:27.845704 containerd[2119]: time="2025-05-17T00:26:27.845166283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\""
May 17 00:26:27.858865 containerd[2119]: time="2025-05-17T00:26:27.858778547Z" level=info msg="CreateContainer within sandbox \"42f2733cfbf7242e0f8ca3df84c800fffdc5c49e1e2364b7744a2982048caa9b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
May 17 00:26:27.859065 containerd[2119]: time="2025-05-17T00:26:27.859044524Z" level=info msg="RemovePodSandbox for \"b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74\""
May 17 00:26:27.866872 containerd[2119]: time="2025-05-17T00:26:27.866832282Z" level=info msg="Forcibly stopping sandbox \"b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74\""
May 17 00:26:27.898879 containerd[2119]: time="2025-05-17T00:26:27.897839340Z" level=info msg="CreateContainer within sandbox \"42f2733cfbf7242e0f8ca3df84c800fffdc5c49e1e2364b7744a2982048caa9b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"71df5e0ea08d2e4c6c00dde9efec12471f21386e75d7f3b190e6bb9cf6b37e2c\""
May 17 00:26:27.935199 containerd[2119]: time="2025-05-17T00:26:27.935143834Z" level=info msg="StartContainer for \"71df5e0ea08d2e4c6c00dde9efec12471f21386e75d7f3b190e6bb9cf6b37e2c\""
May 17 00:26:28.027610 containerd[2119]: 2025-05-17 00:26:27.930 [WARNING][6223] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--125-k8s-goldmane--8f77d7b6c--fm6b4-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"f87fae28-48af-42f8-92bd-1ecd569fff56", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 25, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-125", ContainerID:"18c7848d074b2a78a2f32ed0ed90617ac7dab9ce5625d467f146be5111b7b3ee", Pod:"goldmane-8f77d7b6c-fm6b4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.75.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib7def621fb7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
May 17 00:26:28.027610 containerd[2119]: 2025-05-17 00:26:27.930 [INFO][6223] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74"
May 17 00:26:28.027610 containerd[2119]: 2025-05-17 00:26:27.930 [INFO][6223] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74" iface="eth0" netns=""
May 17 00:26:28.027610 containerd[2119]: 2025-05-17 00:26:27.930 [INFO][6223] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74"
May 17 00:26:28.027610 containerd[2119]: 2025-05-17 00:26:27.930 [INFO][6223] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74"
May 17 00:26:28.027610 containerd[2119]: 2025-05-17 00:26:28.004 [INFO][6230] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74" HandleID="k8s-pod-network.b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74" Workload="ip--172--31--31--125-k8s-goldmane--8f77d7b6c--fm6b4-eth0"
May 17 00:26:28.027610 containerd[2119]: 2025-05-17 00:26:28.004 [INFO][6230] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 17 00:26:28.027610 containerd[2119]: 2025-05-17 00:26:28.004 [INFO][6230] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 17 00:26:28.027610 containerd[2119]: 2025-05-17 00:26:28.013 [WARNING][6230] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74" HandleID="k8s-pod-network.b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74" Workload="ip--172--31--31--125-k8s-goldmane--8f77d7b6c--fm6b4-eth0"
May 17 00:26:28.027610 containerd[2119]: 2025-05-17 00:26:28.013 [INFO][6230] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74" HandleID="k8s-pod-network.b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74" Workload="ip--172--31--31--125-k8s-goldmane--8f77d7b6c--fm6b4-eth0"
May 17 00:26:28.027610 containerd[2119]: 2025-05-17 00:26:28.015 [INFO][6230] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 17 00:26:28.027610 containerd[2119]: 2025-05-17 00:26:28.021 [INFO][6223] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74"
May 17 00:26:28.027610 containerd[2119]: time="2025-05-17T00:26:28.025535433Z" level=info msg="TearDown network for sandbox \"b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74\" successfully"
May 17 00:26:28.049044 systemd[1]: run-containerd-runc-k8s.io-71df5e0ea08d2e4c6c00dde9efec12471f21386e75d7f3b190e6bb9cf6b37e2c-runc.Sc27SA.mount: Deactivated successfully.
May 17 00:26:28.063666 containerd[2119]: time="2025-05-17T00:26:28.063617499Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 17 00:26:28.063832 containerd[2119]: time="2025-05-17T00:26:28.063727173Z" level=info msg="RemovePodSandbox \"b70903406436bd0753be13e6ad6870ce01e7b15e88d75864e2598dab1dd8aa74\" returns successfully"
May 17 00:26:28.109635 containerd[2119]: time="2025-05-17T00:26:28.109187203Z" level=info msg="StartContainer for \"71df5e0ea08d2e4c6c00dde9efec12471f21386e75d7f3b190e6bb9cf6b37e2c\" returns successfully"
May 17 00:26:28.123167 containerd[2119]: time="2025-05-17T00:26:28.123013871Z" level=info msg="StopPodSandbox for \"4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8\""
May 17 00:26:28.239078 containerd[2119]: time="2025-05-17T00:26:28.239031882Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:26:28.240913 kubelet[3368]: I0517 00:26:28.210935 3368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-hhsvr" podStartSLOduration=29.75480592 podStartE2EDuration="42.205506815s" podCreationTimestamp="2025-05-17 00:25:46 +0000 UTC" firstStartedPulling="2025-05-17 00:26:15.38782092 +0000 UTC m=+50.369547651" lastFinishedPulling="2025-05-17 00:26:27.838521807 +0000 UTC m=+62.820248546" observedRunningTime="2025-05-17 00:26:28.203476093 +0000 UTC m=+63.185202831" watchObservedRunningTime="2025-05-17 00:26:28.205506815 +0000 UTC m=+63.187233553"
May 17 00:26:28.247603 kubelet[3368]: I0517 00:26:28.241378 3368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-db6d855c8-9lxqb" podStartSLOduration=34.164821656 podStartE2EDuration="42.241355961s" podCreationTimestamp="2025-05-17 00:25:46 +0000 UTC" firstStartedPulling="2025-05-17 00:26:17.059377275 +0000 UTC m=+52.041103993" lastFinishedPulling="2025-05-17 00:26:25.135911567 +0000 UTC m=+60.117638298" observedRunningTime="2025-05-17 00:26:27.810882006 +0000 UTC m=+62.792608744" watchObservedRunningTime="2025-05-17 00:26:28.241355961 +0000 UTC m=+63.223082700"
May 17 00:26:28.247769 containerd[2119]: time="2025-05-17T00:26:28.243867932Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77"
May 17 00:26:28.249013 containerd[2119]: time="2025-05-17T00:26:28.248930608Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 403.712439ms"
May 17 00:26:28.249013 containerd[2119]: time="2025-05-17T00:26:28.249007158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\""
May 17 00:26:28.253300 containerd[2119]: time="2025-05-17T00:26:28.252082763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\""
May 17 00:26:28.253300 containerd[2119]: time="2025-05-17T00:26:28.252307061Z" level=info msg="CreateContainer within sandbox \"855b60e6842c1b6285c10415b5ebc85df12ea8a64b2d7733b4ec5f1ae8f60a85\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 17 00:26:28.328391 containerd[2119]: time="2025-05-17T00:26:28.328273560Z" level=info msg="CreateContainer within sandbox \"855b60e6842c1b6285c10415b5ebc85df12ea8a64b2d7733b4ec5f1ae8f60a85\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"936887d5a98c3baf823c7478175aa7dc81a304648a8bc80a3c8231011eaf0f11\""
May 17 00:26:28.339211 containerd[2119]: 2025-05-17 00:26:28.174 [WARNING][6276] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--125-k8s-calico--kube--controllers--db6d855c8--9lxqb-eth0", GenerateName:"calico-kube-controllers-db6d855c8-", Namespace:"calico-system", SelfLink:"", UID:"739e36e0-8a50-4381-84dc-d3473d61c58e", ResourceVersion:"1151", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 25, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"db6d855c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-125", ContainerID:"390efb28305abdc78cfbe48cda526fe9c32c3d4e018853266021644d603fcb5c", Pod:"calico-kube-controllers-db6d855c8-9lxqb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibabe576789d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
May 17 00:26:28.339211 containerd[2119]: 2025-05-17 00:26:28.175 [INFO][6276] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8"
May 17 00:26:28.339211 containerd[2119]: 2025-05-17 00:26:28.175 [INFO][6276] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8" iface="eth0" netns=""
May 17 00:26:28.339211 containerd[2119]: 2025-05-17 00:26:28.175 [INFO][6276] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8"
May 17 00:26:28.339211 containerd[2119]: 2025-05-17 00:26:28.175 [INFO][6276] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8"
May 17 00:26:28.339211 containerd[2119]: 2025-05-17 00:26:28.276 [INFO][6283] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8" HandleID="k8s-pod-network.4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8" Workload="ip--172--31--31--125-k8s-calico--kube--controllers--db6d855c8--9lxqb-eth0"
May 17 00:26:28.339211 containerd[2119]: 2025-05-17 00:26:28.277 [INFO][6283] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 17 00:26:28.339211 containerd[2119]: 2025-05-17 00:26:28.277 [INFO][6283] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 17 00:26:28.339211 containerd[2119]: 2025-05-17 00:26:28.306 [WARNING][6283] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8" HandleID="k8s-pod-network.4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8" Workload="ip--172--31--31--125-k8s-calico--kube--controllers--db6d855c8--9lxqb-eth0"
May 17 00:26:28.339211 containerd[2119]: 2025-05-17 00:26:28.306 [INFO][6283] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8" HandleID="k8s-pod-network.4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8" Workload="ip--172--31--31--125-k8s-calico--kube--controllers--db6d855c8--9lxqb-eth0"
May 17 00:26:28.339211 containerd[2119]: 2025-05-17 00:26:28.314 [INFO][6283] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 17 00:26:28.339211 containerd[2119]: 2025-05-17 00:26:28.333 [INFO][6276] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8"
May 17 00:26:28.343548 containerd[2119]: time="2025-05-17T00:26:28.339930002Z" level=info msg="TearDown network for sandbox \"4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8\" successfully"
May 17 00:26:28.343548 containerd[2119]: time="2025-05-17T00:26:28.339965221Z" level=info msg="StopPodSandbox for \"4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8\" returns successfully"
May 17 00:26:28.359822 containerd[2119]: time="2025-05-17T00:26:28.359790002Z" level=info msg="StartContainer for \"936887d5a98c3baf823c7478175aa7dc81a304648a8bc80a3c8231011eaf0f11\""
May 17 00:26:28.360776 containerd[2119]: time="2025-05-17T00:26:28.360249086Z" level=info msg="RemovePodSandbox for \"4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8\""
May 17 00:26:28.360776 containerd[2119]: time="2025-05-17T00:26:28.360472464Z" level=info msg="Forcibly stopping sandbox \"4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8\""
May 17 00:26:28.490620 containerd[2119]: time="2025-05-17T00:26:28.490419888Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 17 00:26:28.496364 containerd[2119]: time="2025-05-17T00:26:28.496151065Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden"
May 17 00:26:28.496364 containerd[2119]: time="2025-05-17T00:26:28.496199321Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86"
May 17 00:26:28.510833 kubelet[3368]: E0517 00:26:28.510365 3368 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 17 00:26:28.517645 kubelet[3368]: E0517 00:26:28.517544 3368 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 17 00:26:28.557952 containerd[2119]: 2025-05-17 00:26:28.452 [WARNING][6322] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--125-k8s-calico--kube--controllers--db6d855c8--9lxqb-eth0", GenerateName:"calico-kube-controllers-db6d855c8-", Namespace:"calico-system", SelfLink:"", UID:"739e36e0-8a50-4381-84dc-d3473d61c58e", ResourceVersion:"1151", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 25, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"db6d855c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-125", ContainerID:"390efb28305abdc78cfbe48cda526fe9c32c3d4e018853266021644d603fcb5c", Pod:"calico-kube-controllers-db6d855c8-9lxqb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibabe576789d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
May 17 00:26:28.557952 containerd[2119]: 2025-05-17 00:26:28.453 [INFO][6322] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8"
May 17 00:26:28.557952 containerd[2119]: 2025-05-17 00:26:28.453 [INFO][6322] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8" iface="eth0" netns=""
May 17 00:26:28.557952 containerd[2119]: 2025-05-17 00:26:28.453 [INFO][6322] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8"
May 17 00:26:28.557952 containerd[2119]: 2025-05-17 00:26:28.454 [INFO][6322] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8"
May 17 00:26:28.557952 containerd[2119]: 2025-05-17 00:26:28.527 [INFO][6352] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8" HandleID="k8s-pod-network.4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8" Workload="ip--172--31--31--125-k8s-calico--kube--controllers--db6d855c8--9lxqb-eth0"
May 17 00:26:28.557952 containerd[2119]: 2025-05-17 00:26:28.527 [INFO][6352] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 17 00:26:28.557952 containerd[2119]: 2025-05-17 00:26:28.527 [INFO][6352] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 17 00:26:28.557952 containerd[2119]: 2025-05-17 00:26:28.539 [WARNING][6352] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8" HandleID="k8s-pod-network.4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8" Workload="ip--172--31--31--125-k8s-calico--kube--controllers--db6d855c8--9lxqb-eth0"
May 17 00:26:28.557952 containerd[2119]: 2025-05-17 00:26:28.539 [INFO][6352] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8" HandleID="k8s-pod-network.4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8" Workload="ip--172--31--31--125-k8s-calico--kube--controllers--db6d855c8--9lxqb-eth0"
May 17 00:26:28.557952 containerd[2119]: 2025-05-17 00:26:28.544 [INFO][6352] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 17 00:26:28.557952 containerd[2119]: 2025-05-17 00:26:28.548 [INFO][6322] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8"
May 17 00:26:28.563883 containerd[2119]: time="2025-05-17T00:26:28.558135495Z" level=info msg="TearDown network for sandbox \"4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8\" successfully"
May 17 00:26:28.563883 containerd[2119]: time="2025-05-17T00:26:28.559815595Z" level=info msg="StartContainer for \"936887d5a98c3baf823c7478175aa7dc81a304648a8bc80a3c8231011eaf0f11\" returns successfully"
May 17 00:26:28.566719 kubelet[3368]: E0517 00:26:28.566224 3368 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2817da406ebd46ae80a13be25f9034c9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k4qzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6fc77d4b98-pwrxt_calico-system(bff9171d-67d4-4c78-9fc8-257a4f17dd49): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 17 00:26:28.572423 containerd[2119]: time="2025-05-17T00:26:28.571538383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\""
May 17 00:26:28.588091 kubelet[3368]: I0517 00:26:28.579979 3368 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 17 00:26:28.592790 kubelet[3368]: I0517 00:26:28.592642 3368 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 17 00:26:28.594460 containerd[2119]: time="2025-05-17T00:26:28.593741959Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 17 00:26:28.594460 containerd[2119]: time="2025-05-17T00:26:28.593823915Z" level=info msg="RemovePodSandbox \"4725a91b4a9c89a1310250b99e494e2372579bf9a2ea11d19aa98943ca4afab8\" returns successfully"
May 17 00:26:28.595359 containerd[2119]: time="2025-05-17T00:26:28.595326976Z" level=info msg="StopPodSandbox for \"86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52\""
May 17 00:26:28.729660 containerd[2119]: 2025-05-17 00:26:28.684 [WARNING][6373] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--125-k8s-csi--node--driver--hhsvr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9548793e-04a2-4303-8663-86deb887e61f", ResourceVersion:"1155", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 25, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-125", ContainerID:"42f2733cfbf7242e0f8ca3df84c800fffdc5c49e1e2364b7744a2982048caa9b", Pod:"csi-node-driver-hhsvr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5776d90a4bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
May 17 00:26:28.729660 containerd[2119]: 2025-05-17 00:26:28.685 [INFO][6373] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52"
May 17 00:26:28.729660 containerd[2119]: 2025-05-17 00:26:28.685 [INFO][6373] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52" iface="eth0" netns=""
May 17 00:26:28.729660 containerd[2119]: 2025-05-17 00:26:28.685 [INFO][6373] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52"
May 17 00:26:28.729660 containerd[2119]: 2025-05-17 00:26:28.685 [INFO][6373] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52"
May 17 00:26:28.729660 containerd[2119]: 2025-05-17 00:26:28.714 [INFO][6384] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52" HandleID="k8s-pod-network.86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52" Workload="ip--172--31--31--125-k8s-csi--node--driver--hhsvr-eth0"
May 17 00:26:28.729660 containerd[2119]: 2025-05-17 00:26:28.715 [INFO][6384] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 17 00:26:28.729660 containerd[2119]: 2025-05-17 00:26:28.715 [INFO][6384] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 17 00:26:28.729660 containerd[2119]: 2025-05-17 00:26:28.721 [WARNING][6384] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52" HandleID="k8s-pod-network.86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52" Workload="ip--172--31--31--125-k8s-csi--node--driver--hhsvr-eth0"
May 17 00:26:28.729660 containerd[2119]: 2025-05-17 00:26:28.722 [INFO][6384] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52" HandleID="k8s-pod-network.86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52" Workload="ip--172--31--31--125-k8s-csi--node--driver--hhsvr-eth0"
May 17 00:26:28.729660 containerd[2119]: 2025-05-17 00:26:28.723 [INFO][6384] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 17 00:26:28.729660 containerd[2119]: 2025-05-17 00:26:28.725 [INFO][6373] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52"
May 17 00:26:28.730981 containerd[2119]: time="2025-05-17T00:26:28.729717049Z" level=info msg="TearDown network for sandbox \"86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52\" successfully"
May 17 00:26:28.730981 containerd[2119]: time="2025-05-17T00:26:28.729738617Z" level=info msg="StopPodSandbox for \"86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52\" returns successfully"
May 17 00:26:28.739952 containerd[2119]: time="2025-05-17T00:26:28.739908175Z" level=info msg="RemovePodSandbox for \"86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52\""
May 17 00:26:28.739952 containerd[2119]: time="2025-05-17T00:26:28.739948332Z" level=info msg="Forcibly stopping sandbox \"86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52\""
May 17 00:26:28.765214 containerd[2119]: time="2025-05-17T00:26:28.765142616Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 17 00:26:28.770249 containerd[2119]: time="2025-05-17T00:26:28.770020840Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden"
May 17 00:26:28.770249 containerd[2119]: time="2025-05-17T00:26:28.770070294Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86"
May 17 00:26:28.772098 kubelet[3368]: E0517 00:26:28.772049 3368 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 17 00:26:28.772309 kubelet[3368]: E0517 00:26:28.772288 3368 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 17 00:26:28.772805 kubelet[3368]: E0517 00:26:28.772529 3368 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k4qzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6fc77d4b98-pwrxt_calico-system(bff9171d-67d4-4c78-9fc8-257a4f17dd49): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 17 00:26:28.779816 kubelet[3368]: E0517 00:26:28.779720 3368 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6fc77d4b98-pwrxt" podUID="bff9171d-67d4-4c78-9fc8-257a4f17dd49"
May 17 00:26:28.854745 containerd[2119]: 2025-05-17 00:26:28.790 [WARNING][6399] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--125-k8s-csi--node--driver--hhsvr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9548793e-04a2-4303-8663-86deb887e61f", ResourceVersion:"1155", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 25, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-125", ContainerID:"42f2733cfbf7242e0f8ca3df84c800fffdc5c49e1e2364b7744a2982048caa9b", Pod:"csi-node-driver-hhsvr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5776d90a4bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
May 17 00:26:28.854745 containerd[2119]: 2025-05-17 00:26:28.790 [INFO][6399] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52"
May 17 00:26:28.854745 containerd[2119]: 2025-05-17 00:26:28.790 [INFO][6399] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52" iface="eth0" netns=""
May 17 00:26:28.854745 containerd[2119]: 2025-05-17 00:26:28.790 [INFO][6399] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52"
May 17 00:26:28.854745 containerd[2119]: 2025-05-17 00:26:28.790 [INFO][6399] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52"
May 17 00:26:28.854745 containerd[2119]: 2025-05-17 00:26:28.838 [INFO][6406] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52" HandleID="k8s-pod-network.86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52" Workload="ip--172--31--31--125-k8s-csi--node--driver--hhsvr-eth0"
May 17 00:26:28.854745 containerd[2119]: 2025-05-17 00:26:28.838 [INFO][6406] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 17 00:26:28.854745 containerd[2119]: 2025-05-17 00:26:28.838 [INFO][6406] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 17 00:26:28.854745 containerd[2119]: 2025-05-17 00:26:28.847 [WARNING][6406] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52" HandleID="k8s-pod-network.86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52" Workload="ip--172--31--31--125-k8s-csi--node--driver--hhsvr-eth0"
May 17 00:26:28.854745 containerd[2119]: 2025-05-17 00:26:28.847 [INFO][6406] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52" HandleID="k8s-pod-network.86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52" Workload="ip--172--31--31--125-k8s-csi--node--driver--hhsvr-eth0"
May 17 00:26:28.854745 containerd[2119]: 2025-05-17 00:26:28.850 [INFO][6406] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 17 00:26:28.854745 containerd[2119]: 2025-05-17 00:26:28.852 [INFO][6399] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52"
May 17 00:26:28.854745 containerd[2119]: time="2025-05-17T00:26:28.854247715Z" level=info msg="TearDown network for sandbox \"86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52\" successfully"
May 17 00:26:28.860312 containerd[2119]: time="2025-05-17T00:26:28.860260492Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 17 00:26:28.860427 containerd[2119]: time="2025-05-17T00:26:28.860345128Z" level=info msg="RemovePodSandbox \"86a839b094e10ef26dbba7a1c8abdcaa7d262b80169c0159b4b5746169676d52\" returns successfully"
May 17 00:26:28.860943 containerd[2119]: time="2025-05-17T00:26:28.860913702Z" level=info msg="StopPodSandbox for \"d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832\""
May 17 00:26:28.891168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1078298440.mount: Deactivated successfully.
May 17 00:26:29.043231 containerd[2119]: 2025-05-17 00:26:28.943 [WARNING][6422] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832" WorkloadEndpoint="ip--172--31--31--125-k8s-whisker--7c9fd756b4--hwshk-eth0"
May 17 00:26:29.043231 containerd[2119]: 2025-05-17 00:26:28.944 [INFO][6422] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832"
May 17 00:26:29.043231 containerd[2119]: 2025-05-17 00:26:28.944 [INFO][6422] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring.
ContainerID="d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832" iface="eth0" netns="" May 17 00:26:29.043231 containerd[2119]: 2025-05-17 00:26:28.945 [INFO][6422] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832" May 17 00:26:29.043231 containerd[2119]: 2025-05-17 00:26:28.945 [INFO][6422] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832" May 17 00:26:29.043231 containerd[2119]: 2025-05-17 00:26:29.026 [INFO][6433] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832" HandleID="k8s-pod-network.d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832" Workload="ip--172--31--31--125-k8s-whisker--7c9fd756b4--hwshk-eth0" May 17 00:26:29.043231 containerd[2119]: 2025-05-17 00:26:29.026 [INFO][6433] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:29.043231 containerd[2119]: 2025-05-17 00:26:29.026 [INFO][6433] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:26:29.043231 containerd[2119]: 2025-05-17 00:26:29.035 [WARNING][6433] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832" HandleID="k8s-pod-network.d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832" Workload="ip--172--31--31--125-k8s-whisker--7c9fd756b4--hwshk-eth0" May 17 00:26:29.043231 containerd[2119]: 2025-05-17 00:26:29.036 [INFO][6433] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832" HandleID="k8s-pod-network.d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832" Workload="ip--172--31--31--125-k8s-whisker--7c9fd756b4--hwshk-eth0" May 17 00:26:29.043231 containerd[2119]: 2025-05-17 00:26:29.037 [INFO][6433] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:29.043231 containerd[2119]: 2025-05-17 00:26:29.040 [INFO][6422] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832" May 17 00:26:29.057043 containerd[2119]: time="2025-05-17T00:26:29.056969009Z" level=info msg="TearDown network for sandbox \"d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832\" successfully" May 17 00:26:29.057333 containerd[2119]: time="2025-05-17T00:26:29.057310565Z" level=info msg="StopPodSandbox for \"d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832\" returns successfully" May 17 00:26:29.078858 containerd[2119]: time="2025-05-17T00:26:29.078825300Z" level=info msg="RemovePodSandbox for \"d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832\"" May 17 00:26:29.079355 containerd[2119]: time="2025-05-17T00:26:29.079336206Z" level=info msg="Forcibly stopping sandbox \"d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832\"" May 17 00:26:29.205029 containerd[2119]: time="2025-05-17T00:26:29.204878837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:26:29.231660 containerd[2119]: 2025-05-17 00:26:29.139 [WARNING][6448] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832" WorkloadEndpoint="ip--172--31--31--125-k8s-whisker--7c9fd756b4--hwshk-eth0" May 17 00:26:29.231660 containerd[2119]: 2025-05-17 00:26:29.140 [INFO][6448] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832" May 17 00:26:29.231660 containerd[2119]: 2025-05-17 00:26:29.140 [INFO][6448] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832" iface="eth0" netns="" May 17 00:26:29.231660 containerd[2119]: 2025-05-17 00:26:29.140 [INFO][6448] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832" May 17 00:26:29.231660 containerd[2119]: 2025-05-17 00:26:29.140 [INFO][6448] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832" May 17 00:26:29.231660 containerd[2119]: 2025-05-17 00:26:29.191 [INFO][6455] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832" HandleID="k8s-pod-network.d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832" Workload="ip--172--31--31--125-k8s-whisker--7c9fd756b4--hwshk-eth0" May 17 00:26:29.231660 containerd[2119]: 2025-05-17 00:26:29.193 [INFO][6455] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:29.231660 containerd[2119]: 2025-05-17 00:26:29.193 [INFO][6455] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:26:29.231660 containerd[2119]: 2025-05-17 00:26:29.209 [WARNING][6455] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832" HandleID="k8s-pod-network.d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832" Workload="ip--172--31--31--125-k8s-whisker--7c9fd756b4--hwshk-eth0" May 17 00:26:29.231660 containerd[2119]: 2025-05-17 00:26:29.209 [INFO][6455] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832" HandleID="k8s-pod-network.d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832" Workload="ip--172--31--31--125-k8s-whisker--7c9fd756b4--hwshk-eth0" May 17 00:26:29.231660 containerd[2119]: 2025-05-17 00:26:29.217 [INFO][6455] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:29.231660 containerd[2119]: 2025-05-17 00:26:29.221 [INFO][6448] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832" May 17 00:26:29.234772 containerd[2119]: time="2025-05-17T00:26:29.234698045Z" level=info msg="TearDown network for sandbox \"d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832\" successfully" May 17 00:26:29.246476 containerd[2119]: time="2025-05-17T00:26:29.246211482Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:26:29.246476 containerd[2119]: time="2025-05-17T00:26:29.246286801Z" level=info msg="RemovePodSandbox \"d064ca98c14762ba381e6b79012fb01f6f5acdb21ceb96ce098a09d2fdca1832\" returns successfully" May 17 00:26:29.248300 containerd[2119]: time="2025-05-17T00:26:29.248021987Z" level=info msg="StopPodSandbox for \"9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2\"" May 17 00:26:29.272955 kubelet[3368]: I0517 00:26:29.271912 3368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-58fb97568c-2dtmf" podStartSLOduration=37.748276719 podStartE2EDuration="48.271574449s" podCreationTimestamp="2025-05-17 00:25:41 +0000 UTC" firstStartedPulling="2025-05-17 00:26:17.726735575 +0000 UTC m=+52.708462295" lastFinishedPulling="2025-05-17 00:26:28.250033295 +0000 UTC m=+63.231760025" observedRunningTime="2025-05-17 00:26:29.271307739 +0000 UTC m=+64.253034477" watchObservedRunningTime="2025-05-17 00:26:29.271574449 +0000 UTC m=+64.253301189" May 17 00:26:29.446298 containerd[2119]: time="2025-05-17T00:26:29.446238175Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:26:29.452182 containerd[2119]: time="2025-05-17T00:26:29.452117470Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:26:29.452476 containerd[2119]: time="2025-05-17T00:26:29.452347017Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes 
read=86" May 17 00:26:29.454036 kubelet[3368]: E0517 00:26:29.453922 3368 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:26:29.454758 kubelet[3368]: E0517 00:26:29.454475 3368 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:26:29.490665 kubelet[3368]: E0517 00:26:29.488971 3368 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d9ntn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-fm6b4_calico-system(f87fae28-48af-42f8-92bd-1ecd569fff56): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:26:29.492346 kubelet[3368]: E0517 00:26:29.491221 3368 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-fm6b4" podUID="f87fae28-48af-42f8-92bd-1ecd569fff56" May 17 00:26:29.510657 containerd[2119]: 2025-05-17 00:26:29.428 [WARNING][6469] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--125-k8s-coredns--7c65d6cfc9--m5q6w-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c4fe4f39-7918-4903-9c97-2e02a23b49cc", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 25, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-125", ContainerID:"978c9f04bcf870571067cac046335fa2fbb7ce4c26b212cbcbbda6860cf35e4c", Pod:"coredns-7c65d6cfc9-m5q6w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia92f5acd775", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:29.510657 containerd[2119]: 2025-05-17 00:26:29.429 [INFO][6469] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2" May 17 00:26:29.510657 containerd[2119]: 2025-05-17 00:26:29.429 [INFO][6469] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2" iface="eth0" netns="" May 17 00:26:29.510657 containerd[2119]: 2025-05-17 00:26:29.430 [INFO][6469] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2" May 17 00:26:29.510657 containerd[2119]: 2025-05-17 00:26:29.430 [INFO][6469] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2" May 17 00:26:29.510657 containerd[2119]: 2025-05-17 00:26:29.494 [INFO][6478] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2" HandleID="k8s-pod-network.9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2" Workload="ip--172--31--31--125-k8s-coredns--7c65d6cfc9--m5q6w-eth0" May 17 00:26:29.510657 containerd[2119]: 2025-05-17 00:26:29.494 [INFO][6478] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:29.510657 containerd[2119]: 2025-05-17 00:26:29.495 [INFO][6478] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:26:29.510657 containerd[2119]: 2025-05-17 00:26:29.503 [WARNING][6478] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2" HandleID="k8s-pod-network.9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2" Workload="ip--172--31--31--125-k8s-coredns--7c65d6cfc9--m5q6w-eth0" May 17 00:26:29.510657 containerd[2119]: 2025-05-17 00:26:29.503 [INFO][6478] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2" HandleID="k8s-pod-network.9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2" Workload="ip--172--31--31--125-k8s-coredns--7c65d6cfc9--m5q6w-eth0" May 17 00:26:29.510657 containerd[2119]: 2025-05-17 00:26:29.505 [INFO][6478] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:29.510657 containerd[2119]: 2025-05-17 00:26:29.507 [INFO][6469] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2" May 17 00:26:29.511402 containerd[2119]: time="2025-05-17T00:26:29.510724780Z" level=info msg="TearDown network for sandbox \"9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2\" successfully" May 17 00:26:29.511402 containerd[2119]: time="2025-05-17T00:26:29.510756507Z" level=info msg="StopPodSandbox for \"9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2\" returns successfully" May 17 00:26:29.512841 containerd[2119]: time="2025-05-17T00:26:29.512805562Z" level=info msg="RemovePodSandbox for \"9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2\"" May 17 00:26:29.512963 containerd[2119]: time="2025-05-17T00:26:29.512847906Z" level=info msg="Forcibly stopping sandbox \"9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2\"" May 17 00:26:29.619973 containerd[2119]: 2025-05-17 00:26:29.566 [WARNING][6493] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--125-k8s-coredns--7c65d6cfc9--m5q6w-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c4fe4f39-7918-4903-9c97-2e02a23b49cc", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 25, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-125", ContainerID:"978c9f04bcf870571067cac046335fa2fbb7ce4c26b212cbcbbda6860cf35e4c", Pod:"coredns-7c65d6cfc9-m5q6w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia92f5acd775", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:29.619973 containerd[2119]: 2025-05-17 00:26:29.566 [INFO][6493] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2" May 17 00:26:29.619973 containerd[2119]: 2025-05-17 00:26:29.566 [INFO][6493] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2" iface="eth0" netns="" May 17 00:26:29.619973 containerd[2119]: 2025-05-17 00:26:29.566 [INFO][6493] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2" May 17 00:26:29.619973 containerd[2119]: 2025-05-17 00:26:29.566 [INFO][6493] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2" May 17 00:26:29.619973 containerd[2119]: 2025-05-17 00:26:29.605 [INFO][6500] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2" HandleID="k8s-pod-network.9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2" Workload="ip--172--31--31--125-k8s-coredns--7c65d6cfc9--m5q6w-eth0" May 17 00:26:29.619973 containerd[2119]: 2025-05-17 00:26:29.605 [INFO][6500] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:29.619973 containerd[2119]: 2025-05-17 00:26:29.605 [INFO][6500] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:26:29.619973 containerd[2119]: 2025-05-17 00:26:29.612 [WARNING][6500] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2" HandleID="k8s-pod-network.9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2" Workload="ip--172--31--31--125-k8s-coredns--7c65d6cfc9--m5q6w-eth0" May 17 00:26:29.619973 containerd[2119]: 2025-05-17 00:26:29.613 [INFO][6500] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2" HandleID="k8s-pod-network.9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2" Workload="ip--172--31--31--125-k8s-coredns--7c65d6cfc9--m5q6w-eth0" May 17 00:26:29.619973 containerd[2119]: 2025-05-17 00:26:29.614 [INFO][6500] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:29.619973 containerd[2119]: 2025-05-17 00:26:29.617 [INFO][6493] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2" May 17 00:26:29.619973 containerd[2119]: time="2025-05-17T00:26:29.619889088Z" level=info msg="TearDown network for sandbox \"9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2\" successfully" May 17 00:26:29.662714 containerd[2119]: time="2025-05-17T00:26:29.662665737Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:26:29.662943 containerd[2119]: time="2025-05-17T00:26:29.662924014Z" level=info msg="RemovePodSandbox \"9a07939c87437118a579983905eed847f9e484c9e45fc6dd91dc8f979e4325e2\" returns successfully" May 17 00:26:29.663637 containerd[2119]: time="2025-05-17T00:26:29.663611849Z" level=info msg="StopPodSandbox for \"4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539\"" May 17 00:26:29.756797 systemd-resolved[1975]: Under memory pressure, flushing caches. May 17 00:26:29.758959 systemd-journald[1569]: Under memory pressure, flushing caches. May 17 00:26:29.756820 systemd-resolved[1975]: Flushed all caches. May 17 00:26:29.798004 containerd[2119]: 2025-05-17 00:26:29.735 [WARNING][6514] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--2dtmf-eth0", GenerateName:"calico-apiserver-58fb97568c-", Namespace:"calico-apiserver", SelfLink:"", UID:"c7e5c708-6f1e-4a6a-8224-1c84baaaea1e", ResourceVersion:"1174", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 25, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58fb97568c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-125", ContainerID:"855b60e6842c1b6285c10415b5ebc85df12ea8a64b2d7733b4ec5f1ae8f60a85", Pod:"calico-apiserver-58fb97568c-2dtmf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali803bd4c7225", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:29.798004 containerd[2119]: 2025-05-17 00:26:29.736 [INFO][6514] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539" May 17 00:26:29.798004 containerd[2119]: 2025-05-17 00:26:29.736 [INFO][6514] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539" iface="eth0" netns="" May 17 00:26:29.798004 containerd[2119]: 2025-05-17 00:26:29.736 [INFO][6514] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539" May 17 00:26:29.798004 containerd[2119]: 2025-05-17 00:26:29.736 [INFO][6514] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539" May 17 00:26:29.798004 containerd[2119]: 2025-05-17 00:26:29.772 [INFO][6521] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539" HandleID="k8s-pod-network.4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539" Workload="ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--2dtmf-eth0" May 17 00:26:29.798004 containerd[2119]: 2025-05-17 00:26:29.773 [INFO][6521] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:29.798004 containerd[2119]: 2025-05-17 00:26:29.773 [INFO][6521] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:26:29.798004 containerd[2119]: 2025-05-17 00:26:29.789 [WARNING][6521] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539" HandleID="k8s-pod-network.4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539" Workload="ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--2dtmf-eth0" May 17 00:26:29.798004 containerd[2119]: 2025-05-17 00:26:29.789 [INFO][6521] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539" HandleID="k8s-pod-network.4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539" Workload="ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--2dtmf-eth0" May 17 00:26:29.798004 containerd[2119]: 2025-05-17 00:26:29.792 [INFO][6521] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:29.798004 containerd[2119]: 2025-05-17 00:26:29.794 [INFO][6514] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539" May 17 00:26:29.799244 containerd[2119]: time="2025-05-17T00:26:29.798045935Z" level=info msg="TearDown network for sandbox \"4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539\" successfully" May 17 00:26:29.799244 containerd[2119]: time="2025-05-17T00:26:29.798076430Z" level=info msg="StopPodSandbox for \"4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539\" returns successfully" May 17 00:26:29.830737 containerd[2119]: time="2025-05-17T00:26:29.830698202Z" level=info msg="RemovePodSandbox for \"4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539\"" May 17 00:26:29.830899 containerd[2119]: time="2025-05-17T00:26:29.830761339Z" level=info msg="Forcibly stopping sandbox \"4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539\"" May 17 00:26:29.930319 containerd[2119]: 2025-05-17 00:26:29.883 [WARNING][6535] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--2dtmf-eth0", GenerateName:"calico-apiserver-58fb97568c-", Namespace:"calico-apiserver", SelfLink:"", UID:"c7e5c708-6f1e-4a6a-8224-1c84baaaea1e", ResourceVersion:"1174", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 25, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58fb97568c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-125", ContainerID:"855b60e6842c1b6285c10415b5ebc85df12ea8a64b2d7733b4ec5f1ae8f60a85", Pod:"calico-apiserver-58fb97568c-2dtmf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali803bd4c7225", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:29.930319 containerd[2119]: 2025-05-17 00:26:29.883 [INFO][6535] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539" May 17 00:26:29.930319 containerd[2119]: 2025-05-17 00:26:29.883 [INFO][6535] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539" iface="eth0" netns="" May 17 00:26:29.930319 containerd[2119]: 2025-05-17 00:26:29.883 [INFO][6535] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539" May 17 00:26:29.930319 containerd[2119]: 2025-05-17 00:26:29.883 [INFO][6535] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539" May 17 00:26:29.930319 containerd[2119]: 2025-05-17 00:26:29.917 [INFO][6542] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539" HandleID="k8s-pod-network.4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539" Workload="ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--2dtmf-eth0" May 17 00:26:29.930319 containerd[2119]: 2025-05-17 00:26:29.917 [INFO][6542] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:29.930319 containerd[2119]: 2025-05-17 00:26:29.917 [INFO][6542] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:26:29.930319 containerd[2119]: 2025-05-17 00:26:29.924 [WARNING][6542] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539" HandleID="k8s-pod-network.4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539" Workload="ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--2dtmf-eth0" May 17 00:26:29.930319 containerd[2119]: 2025-05-17 00:26:29.924 [INFO][6542] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539" HandleID="k8s-pod-network.4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539" Workload="ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--2dtmf-eth0" May 17 00:26:29.930319 containerd[2119]: 2025-05-17 00:26:29.926 [INFO][6542] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:29.930319 containerd[2119]: 2025-05-17 00:26:29.928 [INFO][6535] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539" May 17 00:26:29.932230 containerd[2119]: time="2025-05-17T00:26:29.930377290Z" level=info msg="TearDown network for sandbox \"4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539\" successfully" May 17 00:26:29.939081 containerd[2119]: time="2025-05-17T00:26:29.938722783Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:26:29.939081 containerd[2119]: time="2025-05-17T00:26:29.938807733Z" level=info msg="RemovePodSandbox \"4751719b8b44233b2b0a8e89e520eb7f4ff774a55e9358e38879b919112a6539\" returns successfully" May 17 00:26:29.939926 containerd[2119]: time="2025-05-17T00:26:29.939892959Z" level=info msg="StopPodSandbox for \"92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5\"" May 17 00:26:30.139328 containerd[2119]: 2025-05-17 00:26:30.023 [WARNING][6556] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--125-k8s-coredns--7c65d6cfc9--ljd7m-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"689d1667-b089-4fd0-8ef7-000242998aaf", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 25, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-125", ContainerID:"d1e91c5373072f64c8b1fcb8d7cbf229029ec27c879fb3e9b61893d07fca8a0c", Pod:"coredns-7c65d6cfc9-ljd7m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali80657e86d5c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:30.139328 containerd[2119]: 2025-05-17 00:26:30.024 [INFO][6556] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5" May 17 00:26:30.139328 containerd[2119]: 2025-05-17 00:26:30.024 [INFO][6556] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5" iface="eth0" netns="" May 17 00:26:30.139328 containerd[2119]: 2025-05-17 00:26:30.024 [INFO][6556] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5" May 17 00:26:30.139328 containerd[2119]: 2025-05-17 00:26:30.024 [INFO][6556] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5" May 17 00:26:30.139328 containerd[2119]: 2025-05-17 00:26:30.115 [INFO][6563] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5" HandleID="k8s-pod-network.92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5" Workload="ip--172--31--31--125-k8s-coredns--7c65d6cfc9--ljd7m-eth0" May 17 00:26:30.139328 containerd[2119]: 2025-05-17 00:26:30.115 [INFO][6563] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:30.139328 containerd[2119]: 2025-05-17 00:26:30.115 [INFO][6563] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:26:30.139328 containerd[2119]: 2025-05-17 00:26:30.131 [WARNING][6563] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5" HandleID="k8s-pod-network.92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5" Workload="ip--172--31--31--125-k8s-coredns--7c65d6cfc9--ljd7m-eth0" May 17 00:26:30.139328 containerd[2119]: 2025-05-17 00:26:30.131 [INFO][6563] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5" HandleID="k8s-pod-network.92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5" Workload="ip--172--31--31--125-k8s-coredns--7c65d6cfc9--ljd7m-eth0" May 17 00:26:30.139328 containerd[2119]: 2025-05-17 00:26:30.133 [INFO][6563] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:30.139328 containerd[2119]: 2025-05-17 00:26:30.136 [INFO][6556] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5" May 17 00:26:30.142724 containerd[2119]: time="2025-05-17T00:26:30.139283389Z" level=info msg="TearDown network for sandbox \"92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5\" successfully" May 17 00:26:30.142724 containerd[2119]: time="2025-05-17T00:26:30.139444935Z" level=info msg="StopPodSandbox for \"92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5\" returns successfully" May 17 00:26:30.142724 containerd[2119]: time="2025-05-17T00:26:30.141297210Z" level=info msg="RemovePodSandbox for \"92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5\"" May 17 00:26:30.142724 containerd[2119]: time="2025-05-17T00:26:30.141332086Z" level=info msg="Forcibly stopping sandbox \"92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5\"" May 17 00:26:30.193311 systemd[1]: Started sshd@14-172.31.31.125:22-147.75.109.163:44236.service - OpenSSH per-connection server daemon (147.75.109.163:44236). May 17 00:26:30.435699 kubelet[3368]: I0517 00:26:30.434970 3368 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:26:30.557936 containerd[2119]: 2025-05-17 00:26:30.278 [WARNING][6578] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--125-k8s-coredns--7c65d6cfc9--ljd7m-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"689d1667-b089-4fd0-8ef7-000242998aaf", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 25, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-125", ContainerID:"d1e91c5373072f64c8b1fcb8d7cbf229029ec27c879fb3e9b61893d07fca8a0c", Pod:"coredns-7c65d6cfc9-ljd7m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali80657e86d5c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:30.557936 containerd[2119]: 2025-05-17 00:26:30.285 [INFO][6578] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5" May 17 00:26:30.557936 containerd[2119]: 2025-05-17 00:26:30.285 [INFO][6578] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5" iface="eth0" netns="" May 17 00:26:30.557936 containerd[2119]: 2025-05-17 00:26:30.285 [INFO][6578] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5" May 17 00:26:30.557936 containerd[2119]: 2025-05-17 00:26:30.285 [INFO][6578] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5" May 17 00:26:30.557936 containerd[2119]: 2025-05-17 00:26:30.463 [INFO][6586] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5" HandleID="k8s-pod-network.92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5" Workload="ip--172--31--31--125-k8s-coredns--7c65d6cfc9--ljd7m-eth0" May 17 00:26:30.557936 containerd[2119]: 2025-05-17 00:26:30.464 [INFO][6586] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:30.557936 containerd[2119]: 2025-05-17 00:26:30.464 [INFO][6586] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:26:30.557936 containerd[2119]: 2025-05-17 00:26:30.502 [WARNING][6586] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5" HandleID="k8s-pod-network.92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5" Workload="ip--172--31--31--125-k8s-coredns--7c65d6cfc9--ljd7m-eth0" May 17 00:26:30.557936 containerd[2119]: 2025-05-17 00:26:30.502 [INFO][6586] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5" HandleID="k8s-pod-network.92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5" Workload="ip--172--31--31--125-k8s-coredns--7c65d6cfc9--ljd7m-eth0" May 17 00:26:30.557936 containerd[2119]: 2025-05-17 00:26:30.507 [INFO][6586] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:30.557936 containerd[2119]: 2025-05-17 00:26:30.530 [INFO][6578] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5" May 17 00:26:30.557936 containerd[2119]: time="2025-05-17T00:26:30.547861687Z" level=info msg="TearDown network for sandbox \"92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5\" successfully" May 17 00:26:30.563954 containerd[2119]: time="2025-05-17T00:26:30.558409821Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:26:30.563954 containerd[2119]: time="2025-05-17T00:26:30.558528311Z" level=info msg="RemovePodSandbox \"92e35996d121e799c3178dc3ecbda83e5f2ba2e0403979da91ae75571b57a5b5\" returns successfully" May 17 00:26:30.587478 containerd[2119]: time="2025-05-17T00:26:30.586807132Z" level=info msg="StopPodSandbox for \"b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0\"" May 17 00:26:30.759903 sshd[6582]: Accepted publickey for core from 147.75.109.163 port 44236 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:26:30.774930 sshd[6582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:26:30.819779 systemd[1]: run-containerd-runc-k8s.io-c304ba8f64f765df225e32b0586b4562fb6f21de1c534b0bf0d476040a8b6191-runc.68hxAp.mount: Deactivated successfully. May 17 00:26:30.856494 systemd-logind[2074]: New session 15 of user core. May 17 00:26:30.862217 systemd[1]: Started session-15.scope - Session 15 of User core. May 17 00:26:30.922794 containerd[2119]: 2025-05-17 00:26:30.752 [WARNING][6602] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--9q2hm-eth0", GenerateName:"calico-apiserver-58fb97568c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e097cbd3-9914-4403-a492-af7b73e56564", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 25, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58fb97568c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-125", ContainerID:"724faaf140462e2a3aedee0e2f4948df670f13129a153bb7dcc162323bac02ba", Pod:"calico-apiserver-58fb97568c-9q2hm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7087658358d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:30.922794 containerd[2119]: 2025-05-17 00:26:30.761 [INFO][6602] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0" May 17 00:26:30.922794 containerd[2119]: 2025-05-17 00:26:30.762 [INFO][6602] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0" iface="eth0" netns="" May 17 00:26:30.922794 containerd[2119]: 2025-05-17 00:26:30.762 [INFO][6602] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0" May 17 00:26:30.922794 containerd[2119]: 2025-05-17 00:26:30.763 [INFO][6602] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0" May 17 00:26:30.922794 containerd[2119]: 2025-05-17 00:26:30.880 [INFO][6615] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0" HandleID="k8s-pod-network.b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0" Workload="ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--9q2hm-eth0" May 17 00:26:30.922794 containerd[2119]: 2025-05-17 00:26:30.880 [INFO][6615] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:30.922794 containerd[2119]: 2025-05-17 00:26:30.880 [INFO][6615] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:26:30.922794 containerd[2119]: 2025-05-17 00:26:30.907 [WARNING][6615] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0" HandleID="k8s-pod-network.b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0" Workload="ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--9q2hm-eth0" May 17 00:26:30.922794 containerd[2119]: 2025-05-17 00:26:30.907 [INFO][6615] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0" HandleID="k8s-pod-network.b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0" Workload="ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--9q2hm-eth0" May 17 00:26:30.922794 containerd[2119]: 2025-05-17 00:26:30.914 [INFO][6615] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:30.922794 containerd[2119]: 2025-05-17 00:26:30.920 [INFO][6602] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0" May 17 00:26:30.924987 containerd[2119]: time="2025-05-17T00:26:30.923208517Z" level=info msg="TearDown network for sandbox \"b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0\" successfully" May 17 00:26:30.924987 containerd[2119]: time="2025-05-17T00:26:30.923231454Z" level=info msg="StopPodSandbox for \"b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0\" returns successfully" May 17 00:26:30.925326 containerd[2119]: time="2025-05-17T00:26:30.925305165Z" level=info msg="RemovePodSandbox for \"b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0\"" May 17 00:26:30.925663 containerd[2119]: time="2025-05-17T00:26:30.925646901Z" level=info msg="Forcibly stopping sandbox \"b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0\"" May 17 00:26:31.041698 containerd[2119]: 2025-05-17 00:26:30.962 [WARNING][6645] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--9q2hm-eth0", GenerateName:"calico-apiserver-58fb97568c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e097cbd3-9914-4403-a492-af7b73e56564", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 25, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58fb97568c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-125", ContainerID:"724faaf140462e2a3aedee0e2f4948df670f13129a153bb7dcc162323bac02ba", Pod:"calico-apiserver-58fb97568c-9q2hm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7087658358d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:31.041698 containerd[2119]: 2025-05-17 00:26:30.962 [INFO][6645] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0" May 17 00:26:31.041698 containerd[2119]: 2025-05-17 00:26:30.962 [INFO][6645] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0" iface="eth0" netns="" May 17 00:26:31.041698 containerd[2119]: 2025-05-17 00:26:30.962 [INFO][6645] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0" May 17 00:26:31.041698 containerd[2119]: 2025-05-17 00:26:30.962 [INFO][6645] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0" May 17 00:26:31.041698 containerd[2119]: 2025-05-17 00:26:31.001 [INFO][6652] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0" HandleID="k8s-pod-network.b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0" Workload="ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--9q2hm-eth0" May 17 00:26:31.041698 containerd[2119]: 2025-05-17 00:26:31.001 [INFO][6652] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:31.041698 containerd[2119]: 2025-05-17 00:26:31.001 [INFO][6652] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:26:31.041698 containerd[2119]: 2025-05-17 00:26:31.022 [WARNING][6652] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0" HandleID="k8s-pod-network.b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0" Workload="ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--9q2hm-eth0" May 17 00:26:31.041698 containerd[2119]: 2025-05-17 00:26:31.022 [INFO][6652] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0" HandleID="k8s-pod-network.b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0" Workload="ip--172--31--31--125-k8s-calico--apiserver--58fb97568c--9q2hm-eth0" May 17 00:26:31.041698 containerd[2119]: 2025-05-17 00:26:31.027 [INFO][6652] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:31.041698 containerd[2119]: 2025-05-17 00:26:31.037 [INFO][6645] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0" May 17 00:26:31.044692 containerd[2119]: time="2025-05-17T00:26:31.042843220Z" level=info msg="TearDown network for sandbox \"b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0\" successfully" May 17 00:26:31.053734 containerd[2119]: time="2025-05-17T00:26:31.053560193Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:26:31.053734 containerd[2119]: time="2025-05-17T00:26:31.053646534Z" level=info msg="RemovePodSandbox \"b6d4d13d6a276c279136506566380096f1067fe23c90214f48d03a59cb9952b0\" returns successfully" May 17 00:26:31.805672 systemd-resolved[1975]: Under memory pressure, flushing caches. May 17 00:26:31.807848 systemd-journald[1569]: Under memory pressure, flushing caches. May 17 00:26:31.805705 systemd-resolved[1975]: Flushed all caches. May 17 00:26:31.947838 sshd[6582]: pam_unix(sshd:session): session closed for user core May 17 00:26:31.951618 systemd[1]: sshd@14-172.31.31.125:22-147.75.109.163:44236.service: Deactivated successfully. May 17 00:26:31.956674 systemd[1]: session-15.scope: Deactivated successfully. May 17 00:26:31.957801 systemd-logind[2074]: Session 15 logged out. Waiting for processes to exit. May 17 00:26:31.959417 systemd-logind[2074]: Removed session 15. May 17 00:26:36.976506 systemd[1]: Started sshd@15-172.31.31.125:22-147.75.109.163:44244.service - OpenSSH per-connection server daemon (147.75.109.163:44244). May 17 00:26:37.160645 sshd[6677]: Accepted publickey for core from 147.75.109.163 port 44244 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:26:37.165176 sshd[6677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:26:37.175793 systemd-logind[2074]: New session 16 of user core. May 17 00:26:37.180769 systemd[1]: Started session-16.scope - Session 16 of User core. May 17 00:26:37.628685 systemd-resolved[1975]: Under memory pressure, flushing caches. May 17 00:26:37.628693 systemd-resolved[1975]: Flushed all caches. May 17 00:26:37.630698 systemd-journald[1569]: Under memory pressure, flushing caches. May 17 00:26:37.709958 sshd[6677]: pam_unix(sshd:session): session closed for user core May 17 00:26:37.716799 systemd[1]: sshd@15-172.31.31.125:22-147.75.109.163:44244.service: Deactivated successfully. May 17 00:26:37.721955 systemd[1]: session-16.scope: Deactivated successfully. 
May 17 00:26:37.722184 systemd-logind[2074]: Session 16 logged out. Waiting for processes to exit. May 17 00:26:37.724391 systemd-logind[2074]: Removed session 16. May 17 00:26:37.736934 systemd[1]: Started sshd@16-172.31.31.125:22-147.75.109.163:44250.service - OpenSSH per-connection server daemon (147.75.109.163:44250). May 17 00:26:37.892498 sshd[6693]: Accepted publickey for core from 147.75.109.163 port 44250 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:26:37.895031 sshd[6693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:26:37.901193 systemd-logind[2074]: New session 17 of user core. May 17 00:26:37.907232 systemd[1]: Started session-17.scope - Session 17 of User core. May 17 00:26:38.576537 sshd[6693]: pam_unix(sshd:session): session closed for user core May 17 00:26:38.587236 systemd[1]: sshd@16-172.31.31.125:22-147.75.109.163:44250.service: Deactivated successfully. May 17 00:26:38.594674 systemd-logind[2074]: Session 17 logged out. Waiting for processes to exit. May 17 00:26:38.605347 systemd[1]: Started sshd@17-172.31.31.125:22-147.75.109.163:60906.service - OpenSSH per-connection server daemon (147.75.109.163:60906). May 17 00:26:38.606171 systemd[1]: session-17.scope: Deactivated successfully. May 17 00:26:38.608251 systemd-logind[2074]: Removed session 17. May 17 00:26:38.783620 sshd[6704]: Accepted publickey for core from 147.75.109.163 port 60906 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:26:38.787368 sshd[6704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:26:38.792485 systemd-logind[2074]: New session 18 of user core. May 17 00:26:38.796046 systemd[1]: Started session-18.scope - Session 18 of User core. May 17 00:26:41.599714 systemd-journald[1569]: Under memory pressure, flushing caches. May 17 00:26:41.597677 systemd-resolved[1975]: Under memory pressure, flushing caches. May 17 00:26:41.597708 systemd-resolved[1975]: Flushed all caches. May 17 00:26:41.838224 sshd[6704]: pam_unix(sshd:session): session closed for user core May 17 00:26:41.857987 systemd[1]: Started sshd@18-172.31.31.125:22-147.75.109.163:60908.service - OpenSSH per-connection server daemon (147.75.109.163:60908). May 17 00:26:41.858660 systemd[1]: sshd@17-172.31.31.125:22-147.75.109.163:60906.service: Deactivated successfully. May 17 00:26:41.871738 systemd-logind[2074]: Session 18 logged out. Waiting for processes to exit. May 17 00:26:41.873683 systemd[1]: session-18.scope: Deactivated successfully. May 17 00:26:41.878571 systemd-logind[2074]: Removed session 18. May 17 00:26:42.107934 sshd[6724]: Accepted publickey for core from 147.75.109.163 port 60908 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:26:42.110770 sshd[6724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:26:42.116794 systemd-logind[2074]: New session 19 of user core. May 17 00:26:42.122955 systemd[1]: Started session-19.scope - Session 19 of User core. May 17 00:26:43.431969 sshd[6724]: pam_unix(sshd:session): session closed for user core May 17 00:26:43.444098 systemd[1]: sshd@18-172.31.31.125:22-147.75.109.163:60908.service: Deactivated successfully. May 17 00:26:43.467728 systemd-logind[2074]: Session 19 logged out. Waiting for processes to exit. May 17 00:26:43.479952 systemd[1]: Started sshd@19-172.31.31.125:22-147.75.109.163:60910.service - OpenSSH per-connection server daemon (147.75.109.163:60910). 
May 17 00:26:43.481238 systemd[1]: session-19.scope: Deactivated successfully. May 17 00:26:43.492443 systemd-logind[2074]: Removed session 19. May 17 00:26:43.553921 kubelet[3368]: E0517 00:26:43.549864 3368 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-6fc77d4b98-pwrxt" podUID="bff9171d-67d4-4c78-9fc8-257a4f17dd49" May 17 00:26:43.553921 kubelet[3368]: E0517 00:26:43.553667 3368 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-fm6b4" podUID="f87fae28-48af-42f8-92bd-1ecd569fff56" May 17 00:26:43.646069 systemd-resolved[1975]: Under memory pressure, flushing caches. May 17 00:26:43.646990 systemd-journald[1569]: Under memory pressure, flushing caches. May 17 00:26:43.646087 systemd-resolved[1975]: Flushed all caches. May 17 00:26:43.690282 sshd[6741]: Accepted publickey for core from 147.75.109.163 port 60910 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:26:43.691822 sshd[6741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:26:43.696924 systemd-logind[2074]: New session 20 of user core. May 17 00:26:43.701942 systemd[1]: Started session-20.scope - Session 20 of User core. May 17 00:26:43.953400 sshd[6741]: pam_unix(sshd:session): session closed for user core May 17 00:26:43.959739 systemd[1]: sshd@19-172.31.31.125:22-147.75.109.163:60910.service: Deactivated successfully. May 17 00:26:43.966393 systemd[1]: session-20.scope: Deactivated successfully. May 17 00:26:43.977467 systemd-logind[2074]: Session 20 logged out. Waiting for processes to exit. May 17 00:26:43.986962 systemd-logind[2074]: Removed session 20. May 17 00:26:48.983338 systemd[1]: Started sshd@20-172.31.31.125:22-147.75.109.163:38850.service - OpenSSH per-connection server daemon (147.75.109.163:38850). May 17 00:26:49.268898 sshd[6758]: Accepted publickey for core from 147.75.109.163 port 38850 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:26:49.272753 sshd[6758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:26:49.288406 systemd-logind[2074]: New session 21 of user core. May 17 00:26:49.293094 systemd[1]: Started session-21.scope - Session 21 of User core. May 17 00:26:49.604245 systemd-journald[1569]: Under memory pressure, flushing caches. May 17 00:26:49.596901 systemd-resolved[1975]: Under memory pressure, flushing caches. May 17 00:26:49.596932 systemd-resolved[1975]: Flushed all caches. May 17 00:26:49.680977 sshd[6758]: pam_unix(sshd:session): session closed for user core May 17 00:26:49.698334 systemd[1]: sshd@20-172.31.31.125:22-147.75.109.163:38850.service: Deactivated successfully. May 17 00:26:49.711651 systemd[1]: session-21.scope: Deactivated successfully. May 17 00:26:49.716804 systemd-logind[2074]: Session 21 logged out. Waiting for processes to exit. May 17 00:26:49.725667 systemd-logind[2074]: Removed session 21. 
May 17 00:26:50.202661 systemd[1]: run-containerd-runc-k8s.io-c304ba8f64f765df225e32b0586b4562fb6f21de1c534b0bf0d476040a8b6191-runc.skokKd.mount: Deactivated successfully. May 17 00:26:51.813405 systemd[1]: run-containerd-runc-k8s.io-eee30c2c60c12bdac4be702c411f1aa3b775fe4826dc72af95040f2dcb129ff6-runc.ZMHhAv.mount: Deactivated successfully. May 17 00:26:54.719344 systemd[1]: Started sshd@21-172.31.31.125:22-147.75.109.163:38862.service - OpenSSH per-connection server daemon (147.75.109.163:38862). May 17 00:26:54.994138 sshd[6822]: Accepted publickey for core from 147.75.109.163 port 38862 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:26:55.000162 sshd[6822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:26:55.013142 systemd-logind[2074]: New session 22 of user core. May 17 00:26:55.020276 systemd[1]: Started session-22.scope - Session 22 of User core. May 17 00:26:55.621060 systemd-journald[1569]: Under memory pressure, flushing caches. May 17 00:26:55.615655 systemd-resolved[1975]: Under memory pressure, flushing caches. May 17 00:26:55.615704 systemd-resolved[1975]: Flushed all caches. May 17 00:26:55.859540 sshd[6822]: pam_unix(sshd:session): session closed for user core May 17 00:26:55.873091 systemd[1]: sshd@21-172.31.31.125:22-147.75.109.163:38862.service: Deactivated successfully. May 17 00:26:55.883169 systemd[1]: session-22.scope: Deactivated successfully. May 17 00:26:55.883512 systemd-logind[2074]: Session 22 logged out. Waiting for processes to exit. May 17 00:26:55.886446 systemd-logind[2074]: Removed session 22. May 17 00:26:55.943263 kubelet[3368]: I0517 00:26:55.887562 3368 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:26:57.263487 containerd[2119]: time="2025-05-17T00:26:57.242313532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:26:57.614119 containerd[2119]: time="2025-05-17T00:26:57.614059593Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:26:57.616334 containerd[2119]: time="2025-05-17T00:26:57.616289441Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:26:57.617776 containerd[2119]: time="2025-05-17T00:26:57.616762957Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:26:57.631812 kubelet[3368]: E0517 00:26:57.631753 3368 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:26:57.638579 kubelet[3368]: E0517 
00:26:57.631833 3368 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:26:57.661158 systemd-resolved[1975]: Under memory pressure, flushing caches. May 17 00:26:57.663328 systemd-journald[1569]: Under memory pressure, flushing caches. May 17 00:26:57.661179 systemd-resolved[1975]: Flushed all caches. May 17 00:26:57.713755 kubelet[3368]: E0517 00:26:57.713669 3368 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d9ntn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-fm6b4_calico-system(f87fae28-48af-42f8-92bd-1ecd569fff56): ErrImagePull: 
failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:26:57.718263 kubelet[3368]: E0517 00:26:57.718019 3368 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-fm6b4" podUID="f87fae28-48af-42f8-92bd-1ecd569fff56" May 17 00:26:58.183499 containerd[2119]: time="2025-05-17T00:26:58.183453546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:26:58.392801 containerd[2119]: time="2025-05-17T00:26:58.392742106Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:26:58.395716 containerd[2119]: time="2025-05-17T00:26:58.395658191Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:26:58.395936 containerd[2119]: time="2025-05-17T00:26:58.395695027Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:26:58.395989 kubelet[3368]: E0517 00:26:58.395916 3368 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:26:58.395989 kubelet[3368]: E0517 00:26:58.395972 3368 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:26:58.396139 kubelet[3368]: E0517 00:26:58.396093 3368 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2817da406ebd46ae80a13be25f9034c9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k4qzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6fc77d4b98-pwrxt_calico-system(bff9171d-67d4-4c78-9fc8-257a4f17dd49): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:26:58.400620 containerd[2119]: time="2025-05-17T00:26:58.400314367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:26:58.589825 containerd[2119]: time="2025-05-17T00:26:58.589751294Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:26:58.592186 containerd[2119]: time="2025-05-17T00:26:58.592028178Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:26:58.592186 containerd[2119]: time="2025-05-17T00:26:58.592122286Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:26:58.592403 kubelet[3368]: E0517 00:26:58.592305 3368 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:26:58.592403 kubelet[3368]: E0517 00:26:58.592365 3368 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:26:58.593138 kubelet[3368]: E0517 00:26:58.592500 3368 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k4qzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6fc77d4b98-pwrxt_calico-system(bff9171d-67d4-4c78-9fc8-257a4f17dd49): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:26:58.596748 kubelet[3368]: E0517 00:26:58.596634 3368 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6fc77d4b98-pwrxt" podUID="bff9171d-67d4-4c78-9fc8-257a4f17dd49" May 17 00:27:00.706558 systemd[1]: run-containerd-runc-k8s.io-c304ba8f64f765df225e32b0586b4562fb6f21de1c534b0bf0d476040a8b6191-runc.oGK753.mount: Deactivated successfully. May 17 00:27:00.893104 systemd[1]: Started sshd@22-172.31.31.125:22-147.75.109.163:34334.service - OpenSSH per-connection server daemon (147.75.109.163:34334). May 17 00:27:01.140094 sshd[6863]: Accepted publickey for core from 147.75.109.163 port 34334 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:27:01.144201 sshd[6863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:27:01.151467 systemd-logind[2074]: New session 23 of user core. May 17 00:27:01.154468 systemd[1]: Started session-23.scope - Session 23 of User core. May 17 00:27:01.631075 systemd-journald[1569]: Under memory pressure, flushing caches. May 17 00:27:01.629194 systemd-resolved[1975]: Under memory pressure, flushing caches. May 17 00:27:01.629240 systemd-resolved[1975]: Flushed all caches. May 17 00:27:02.837377 sshd[6863]: pam_unix(sshd:session): session closed for user core May 17 00:27:02.846479 systemd[1]: sshd@22-172.31.31.125:22-147.75.109.163:34334.service: Deactivated successfully. May 17 00:27:02.851225 systemd-logind[2074]: Session 23 logged out. Waiting for processes to exit. May 17 00:27:02.851919 systemd[1]: session-23.scope: Deactivated successfully. May 17 00:27:02.856080 systemd-logind[2074]: Removed session 23. May 17 00:27:03.676725 systemd-resolved[1975]: Under memory pressure, flushing caches. May 17 00:27:03.676734 systemd-resolved[1975]: Flushed all caches. May 17 00:27:03.681888 systemd-journald[1569]: Under memory pressure, flushing caches. May 17 00:27:07.868387 systemd[1]: Started sshd@23-172.31.31.125:22-147.75.109.163:34336.service - OpenSSH per-connection server daemon (147.75.109.163:34336). May 17 00:27:08.111489 sshd[6879]: Accepted publickey for core from 147.75.109.163 port 34336 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:27:08.119816 sshd[6879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:27:08.133954 systemd-logind[2074]: New session 24 of user core. May 17 00:27:08.142261 systemd[1]: Started session-24.scope - Session 24 of User core. May 17 00:27:08.976968 sshd[6879]: pam_unix(sshd:session): session closed for user core May 17 00:27:08.982012 systemd[1]: sshd@23-172.31.31.125:22-147.75.109.163:34336.service: Deactivated successfully. May 17 00:27:08.993000 systemd-logind[2074]: Session 24 logged out. Waiting for processes to exit. May 17 00:27:08.993794 systemd[1]: session-24.scope: Deactivated successfully. May 17 00:27:08.996535 systemd-logind[2074]: Removed session 24. May 17 00:27:09.628689 systemd-resolved[1975]: Under memory pressure, flushing caches. 
May 17 00:27:09.628717 systemd-resolved[1975]: Flushed all caches. May 17 00:27:09.633027 systemd-journald[1569]: Under memory pressure, flushing caches. May 17 00:27:10.182728 kubelet[3368]: E0517 00:27:10.182449 3368 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-6fc77d4b98-pwrxt" podUID="bff9171d-67d4-4c78-9fc8-257a4f17dd49" May 17 00:27:12.180790 kubelet[3368]: E0517 00:27:12.180560 3368 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-fm6b4" podUID="f87fae28-48af-42f8-92bd-1ecd569fff56" May 17 00:27:14.006926 systemd[1]: Started sshd@24-172.31.31.125:22-147.75.109.163:39178.service - OpenSSH per-connection server daemon (147.75.109.163:39178). May 17 00:27:14.226198 sshd[6894]: Accepted publickey for core from 147.75.109.163 port 39178 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:27:14.234400 sshd[6894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:27:14.243195 systemd-logind[2074]: New session 25 of user core. May 17 00:27:14.248898 systemd[1]: Started session-25.scope - Session 25 of User core. May 17 00:27:15.174114 sshd[6894]: pam_unix(sshd:session): session closed for user core May 17 00:27:15.185591 systemd-logind[2074]: Session 25 logged out. Waiting for processes to exit. May 17 00:27:15.186245 systemd[1]: sshd@24-172.31.31.125:22-147.75.109.163:39178.service: Deactivated successfully. May 17 00:27:15.199961 systemd[1]: session-25.scope: Deactivated successfully. May 17 00:27:15.208109 systemd-logind[2074]: Removed session 25. May 17 00:27:15.648739 systemd-journald[1569]: Under memory pressure, flushing caches. May 17 00:27:15.646673 systemd-resolved[1975]: Under memory pressure, flushing caches. May 17 00:27:15.646700 systemd-resolved[1975]: Flushed all caches. May 17 00:27:21.181157 kubelet[3368]: E0517 00:27:21.181086 3368 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-6fc77d4b98-pwrxt" podUID="bff9171d-67d4-4c78-9fc8-257a4f17dd49" May 17 00:27:21.866560 systemd[1]: run-containerd-runc-k8s.io-eee30c2c60c12bdac4be702c411f1aa3b775fe4826dc72af95040f2dcb129ff6-runc.adOUVg.mount: Deactivated successfully. May 17 00:27:27.181664 kubelet[3368]: E0517 00:27:27.181532 3368 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-fm6b4" podUID="f87fae28-48af-42f8-92bd-1ecd569fff56" May 17 00:27:29.541948 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8e4db560ef507652553ba59610b3fba56104601a16cf1636ce379c9c7a57109-rootfs.mount: Deactivated successfully. 
May 17 00:27:29.599970 containerd[2119]: time="2025-05-17T00:27:29.579891712Z" level=info msg="shim disconnected" id=f8e4db560ef507652553ba59610b3fba56104601a16cf1636ce379c9c7a57109 namespace=k8s.io May 17 00:27:29.603509 containerd[2119]: time="2025-05-17T00:27:29.599970419Z" level=warning msg="cleaning up after shim disconnected" id=f8e4db560ef507652553ba59610b3fba56104601a16cf1636ce379c9c7a57109 namespace=k8s.io May 17 00:27:29.603509 containerd[2119]: time="2025-05-17T00:27:29.599997363Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:27:29.809652 containerd[2119]: time="2025-05-17T00:27:29.809510947Z" level=info msg="shim disconnected" id=58031154b57a0820da55b549141a9a2974d99fe16e577cd33dc624ea0c2e3977 namespace=k8s.io May 17 00:27:29.809652 containerd[2119]: time="2025-05-17T00:27:29.809573794Z" level=warning msg="cleaning up after shim disconnected" id=58031154b57a0820da55b549141a9a2974d99fe16e577cd33dc624ea0c2e3977 namespace=k8s.io May 17 00:27:29.809652 containerd[2119]: time="2025-05-17T00:27:29.809651393Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:27:29.811017 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58031154b57a0820da55b549141a9a2974d99fe16e577cd33dc624ea0c2e3977-rootfs.mount: Deactivated successfully. May 17 00:27:30.399609 kubelet[3368]: I0517 00:27:30.394505 3368 scope.go:117] "RemoveContainer" containerID="f8e4db560ef507652553ba59610b3fba56104601a16cf1636ce379c9c7a57109" May 17 00:27:30.403966 kubelet[3368]: I0517 00:27:30.403848 3368 scope.go:117] "RemoveContainer" containerID="58031154b57a0820da55b549141a9a2974d99fe16e577cd33dc624ea0c2e3977" May 17 00:27:30.440290 containerd[2119]: time="2025-05-17T00:27:30.440191857Z" level=info msg="CreateContainer within sandbox \"abbb044e12017a0daef52eb470801ffc65664c9897124462e9ae52c1603e12cd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" May 17 00:27:30.451244 containerd[2119]: time="2025-05-17T00:27:30.451182983Z" level=info msg="CreateContainer within sandbox \"bfaa8554fcc731f01b386ee2354ee1180856134d3c6c60595b3ff1fb23ef0ea7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" May 17 00:27:30.588270 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4145869511.mount: Deactivated successfully. 
May 17 00:27:30.617608 containerd[2119]: time="2025-05-17T00:27:30.616770385Z" level=info msg="CreateContainer within sandbox \"abbb044e12017a0daef52eb470801ffc65664c9897124462e9ae52c1603e12cd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"ec6f5f9d27828d7d51f9f34d425e4213cd70f8ec26fab0e44e40748f94eb3495\"" May 17 00:27:30.619182 containerd[2119]: time="2025-05-17T00:27:30.619113984Z" level=info msg="CreateContainer within sandbox \"bfaa8554fcc731f01b386ee2354ee1180856134d3c6c60595b3ff1fb23ef0ea7\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"8a732a700215c3b1b569f2b51932278bf7e15e80897b59ba2a50ddaf2561a7ff\"" May 17 00:27:30.622213 containerd[2119]: time="2025-05-17T00:27:30.622188395Z" level=info msg="StartContainer for \"8a732a700215c3b1b569f2b51932278bf7e15e80897b59ba2a50ddaf2561a7ff\"" May 17 00:27:30.623723 containerd[2119]: time="2025-05-17T00:27:30.623497489Z" level=info msg="StartContainer for \"ec6f5f9d27828d7d51f9f34d425e4213cd70f8ec26fab0e44e40748f94eb3495\"" May 17 00:27:30.758939 containerd[2119]: time="2025-05-17T00:27:30.758683683Z" level=info msg="StartContainer for \"8a732a700215c3b1b569f2b51932278bf7e15e80897b59ba2a50ddaf2561a7ff\" returns successfully" May 17 00:27:30.759119 containerd[2119]: time="2025-05-17T00:27:30.759091128Z" level=info msg="StartContainer for \"ec6f5f9d27828d7d51f9f34d425e4213cd70f8ec26fab0e44e40748f94eb3495\" returns successfully" May 17 00:27:33.781424 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40605d88f4b54d50524775ede31df9e755b7182370ca5e56b09861c6028bbee9-rootfs.mount: Deactivated successfully. May 17 00:27:33.784142 containerd[2119]: time="2025-05-17T00:27:33.782845750Z" level=info msg="shim disconnected" id=40605d88f4b54d50524775ede31df9e755b7182370ca5e56b09861c6028bbee9 namespace=k8s.io May 17 00:27:33.784142 containerd[2119]: time="2025-05-17T00:27:33.782934414Z" level=warning msg="cleaning up after shim disconnected" id=40605d88f4b54d50524775ede31df9e755b7182370ca5e56b09861c6028bbee9 namespace=k8s.io May 17 00:27:33.784142 containerd[2119]: time="2025-05-17T00:27:33.782948566Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:27:34.398107 kubelet[3368]: I0517 00:27:34.398072 3368 scope.go:117] "RemoveContainer" containerID="40605d88f4b54d50524775ede31df9e755b7182370ca5e56b09861c6028bbee9" May 17 00:27:34.412511 containerd[2119]: time="2025-05-17T00:27:34.412465472Z" level=info msg="CreateContainer within sandbox \"31d39baced9c8d36a4b980976dfc6ae0da331ca1499e04332dee70e11e577100\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" May 17 00:27:34.450889 containerd[2119]: time="2025-05-17T00:27:34.450830865Z" level=info msg="CreateContainer within sandbox \"31d39baced9c8d36a4b980976dfc6ae0da331ca1499e04332dee70e11e577100\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"3aeae57ef203539bec7e1eb0c9427921b579c4b20c07b4207bc814e2a715725b\"" May 17 00:27:34.451355 containerd[2119]: time="2025-05-17T00:27:34.451335187Z" level=info msg="StartContainer for \"3aeae57ef203539bec7e1eb0c9427921b579c4b20c07b4207bc814e2a715725b\"" May 17 00:27:34.536634 containerd[2119]: time="2025-05-17T00:27:34.535386035Z" level=info msg="StartContainer for \"3aeae57ef203539bec7e1eb0c9427921b579c4b20c07b4207bc814e2a715725b\" returns successfully" May 17 00:27:36.180621 kubelet[3368]: E0517 00:27:36.180513 3368 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-6fc77d4b98-pwrxt" podUID="bff9171d-67d4-4c78-9fc8-257a4f17dd49" May 17 00:27:37.664475 kubelet[3368]: E0517 00:27:37.664400 3368 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-125?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"