May 17 00:24:50.909206 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri May 16 22:44:56 -00 2025 May 17 00:24:50.909231 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e May 17 00:24:50.909243 kernel: BIOS-provided physical RAM map: May 17 00:24:50.909250 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable May 17 00:24:50.909256 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable May 17 00:24:50.909262 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20 May 17 00:24:50.909270 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved May 17 00:24:50.909277 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data May 17 00:24:50.909284 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS May 17 00:24:50.909293 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable May 17 00:24:50.909299 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved May 17 00:24:50.909306 kernel: NX (Execute Disable) protection: active May 17 00:24:50.909313 kernel: APIC: Static calls initialized May 17 00:24:50.909320 kernel: efi: EFI v2.7 by EDK II May 17 00:24:50.909328 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518 May 17 00:24:50.909338 kernel: SMBIOS 2.7 present. 
May 17 00:24:50.909346 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 May 17 00:24:50.909353 kernel: Hypervisor detected: KVM May 17 00:24:50.909361 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 17 00:24:50.909368 kernel: kvm-clock: using sched offset of 4297721456 cycles May 17 00:24:50.909377 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 17 00:24:50.909384 kernel: tsc: Detected 2499.998 MHz processor May 17 00:24:50.909392 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 17 00:24:50.909400 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 17 00:24:50.909408 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 May 17 00:24:50.909418 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs May 17 00:24:50.909425 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 17 00:24:50.909433 kernel: Using GB pages for direct mapping May 17 00:24:50.909440 kernel: Secure boot disabled May 17 00:24:50.909448 kernel: ACPI: Early table checksum verification disabled May 17 00:24:50.909456 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) May 17 00:24:50.909463 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) May 17 00:24:50.909471 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) May 17 00:24:50.909479 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) May 17 00:24:50.909503 kernel: ACPI: FACS 0x00000000789D0000 000040 May 17 00:24:50.909517 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) May 17 00:24:50.909525 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) May 17 00:24:50.909532 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) May 17 00:24:50.909540 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) May 17 00:24:50.909548 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) May 17 00:24:50.909560 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) May 17 00:24:50.909570 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) May 17 00:24:50.909578 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) May 17 00:24:50.909587 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] May 17 00:24:50.909595 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] May 17 00:24:50.909603 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] May 17 00:24:50.909611 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] May 17 00:24:50.909619 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] May 17 00:24:50.909629 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] May 17 00:24:50.909637 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] May 17 00:24:50.909645 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] May 17 00:24:50.909653 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] May 17 00:24:50.909661 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] May 17 00:24:50.909670 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
May 17 00:24:50.909678 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 May 17 00:24:50.909686 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 May 17 00:24:50.909694 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] May 17 00:24:50.909705 kernel: NUMA: Initialized distance table, cnt=1 May 17 00:24:50.909712 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff] May 17 00:24:50.909721 kernel: Zone ranges: May 17 00:24:50.909729 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 17 00:24:50.909737 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] May 17 00:24:50.909745 kernel: Normal empty May 17 00:24:50.909753 kernel: Movable zone start for each node May 17 00:24:50.909761 kernel: Early memory node ranges May 17 00:24:50.909769 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] May 17 00:24:50.909780 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] May 17 00:24:50.909788 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] May 17 00:24:50.909796 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] May 17 00:24:50.909804 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 17 00:24:50.909812 kernel: On node 0, zone DMA: 96 pages in unavailable ranges May 17 00:24:50.909820 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges May 17 00:24:50.909829 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges May 17 00:24:50.909837 kernel: ACPI: PM-Timer IO Port: 0xb008 May 17 00:24:50.909845 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 17 00:24:50.909856 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 May 17 00:24:50.909864 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 17 00:24:50.909872 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 17 00:24:50.909880 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 17 00:24:50.909888 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 17 00:24:50.909896 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 17 00:24:50.909905 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 17 00:24:50.909913 kernel: TSC deadline timer available May 17 00:24:50.909921 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 17 00:24:50.909929 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 17 00:24:50.909939 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices May 17 00:24:50.909948 kernel: Booting paravirtualized kernel on KVM May 17 00:24:50.909956 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 17 00:24:50.909964 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 17 00:24:50.909972 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 May 17 00:24:50.909980 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 May 17 00:24:50.909988 kernel: pcpu-alloc: [0] 0 1 May 17 00:24:50.909996 kernel: kvm-guest: PV spinlocks enabled May 17 00:24:50.910004 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 17 00:24:50.910016 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:24:50.910024 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 17 00:24:50.910032 kernel: random: crng init done May 17 00:24:50.910040 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 00:24:50.910049 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 17 00:24:50.910057 kernel: Fallback order for Node 0: 0 May 17 00:24:50.910065 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318 May 17 00:24:50.910073 kernel: Policy zone: DMA32 May 17 00:24:50.910084 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 17 00:24:50.910093 kernel: Memory: 1874608K/2037804K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42872K init, 2320K bss, 162936K reserved, 0K cma-reserved) May 17 00:24:50.910101 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 17 00:24:50.910109 kernel: Kernel/User page tables isolation: enabled May 17 00:24:50.910117 kernel: ftrace: allocating 37948 entries in 149 pages May 17 00:24:50.910125 kernel: ftrace: allocated 149 pages with 4 groups May 17 00:24:50.910133 kernel: Dynamic Preempt: voluntary May 17 00:24:50.910142 kernel: rcu: Preemptible hierarchical RCU implementation. May 17 00:24:50.910150 kernel: rcu: RCU event tracing is enabled. May 17 00:24:50.910162 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 17 00:24:50.910170 kernel: Trampoline variant of Tasks RCU enabled. May 17 00:24:50.910178 kernel: Rude variant of Tasks RCU enabled. May 17 00:24:50.910186 kernel: Tracing variant of Tasks RCU enabled. May 17 00:24:50.910195 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 17 00:24:50.910203 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 17 00:24:50.910212 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 17 00:24:50.910231 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 17 00:24:50.910239 kernel: Console: colour dummy device 80x25 May 17 00:24:50.910248 kernel: printk: console [tty0] enabled May 17 00:24:50.910256 kernel: printk: console [ttyS0] enabled May 17 00:24:50.910265 kernel: ACPI: Core revision 20230628 May 17 00:24:50.910277 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns May 17 00:24:50.910285 kernel: APIC: Switch to symmetric I/O mode setup May 17 00:24:50.910294 kernel: x2apic enabled May 17 00:24:50.910303 kernel: APIC: Switched APIC routing to: physical x2apic May 17 00:24:50.910312 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns May 17 00:24:50.910323 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
May 17 00:24:50.910332 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 May 17 00:24:50.910340 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 May 17 00:24:50.910349 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 17 00:24:50.910358 kernel: Spectre V2 : Mitigation: Retpolines May 17 00:24:50.910366 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 17 00:24:50.910375 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! May 17 00:24:50.910384 kernel: RETBleed: Vulnerable May 17 00:24:50.910393 kernel: Speculative Store Bypass: Vulnerable May 17 00:24:50.910404 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode May 17 00:24:50.910412 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode May 17 00:24:50.910421 kernel: GDS: Unknown: Dependent on hypervisor status May 17 00:24:50.910429 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 17 00:24:50.910438 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 17 00:24:50.910446 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 17 00:24:50.910455 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' May 17 00:24:50.910464 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' May 17 00:24:50.910472 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' May 17 00:24:50.910493 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' May 17 00:24:50.910502 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' May 17 00:24:50.910513 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' May 17 00:24:50.910522 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 17 00:24:50.910531 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 May 17 00:24:50.910539 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 May 17 00:24:50.910548 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 May 17 00:24:50.910556 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 May 17 00:24:50.910565 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 May 17 00:24:50.910574 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 May 17 00:24:50.910582 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. May 17 00:24:50.910591 kernel: Freeing SMP alternatives memory: 32K May 17 00:24:50.910599 kernel: pid_max: default: 32768 minimum: 301 May 17 00:24:50.910608 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 17 00:24:50.910619 kernel: landlock: Up and running. May 17 00:24:50.910627 kernel: SELinux: Initializing. May 17 00:24:50.910636 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 17 00:24:50.910645 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 17 00:24:50.910653 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) May 17 00:24:50.910662 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 17 00:24:50.910671 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 17 00:24:50.910680 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 17 00:24:50.910689 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. May 17 00:24:50.910698 kernel: signal: max sigframe size: 3632 May 17 00:24:50.910709 kernel: rcu: Hierarchical SRCU implementation. May 17 00:24:50.910718 kernel: rcu: Max phase no-delay instances is 400. May 17 00:24:50.910727 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 17 00:24:50.910735 kernel: smp: Bringing up secondary CPUs ... May 17 00:24:50.910744 kernel: smpboot: x86: Booting SMP configuration: May 17 00:24:50.910753 kernel: .... node #0, CPUs: #1 May 17 00:24:50.910762 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. May 17 00:24:50.910772 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. May 17 00:24:50.910783 kernel: smp: Brought up 1 node, 2 CPUs May 17 00:24:50.910791 kernel: smpboot: Max logical packages: 1 May 17 00:24:50.910800 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) May 17 00:24:50.910809 kernel: devtmpfs: initialized May 17 00:24:50.910817 kernel: x86/mm: Memory block size: 128MB May 17 00:24:50.910826 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) May 17 00:24:50.910835 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 17 00:24:50.910844 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 17 00:24:50.910852 kernel: pinctrl core: initialized pinctrl subsystem May 17 00:24:50.910864 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 17 00:24:50.910873 kernel: audit: initializing netlink subsys (disabled) May 17 00:24:50.910881 kernel: audit: type=2000 audit(1747441490.721:1): state=initialized audit_enabled=0 res=1 May 17 00:24:50.910890 kernel: thermal_sys: Registered thermal governor 'step_wise' May 17 00:24:50.910898 kernel: thermal_sys: Registered thermal governor 'user_space' May 17 00:24:50.910907 kernel: cpuidle: using governor menu May 17 00:24:50.910916 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 17 00:24:50.910925 kernel: dca service started, version 1.12.1 May 17 00:24:50.910933 kernel: PCI: Using configuration type 1 for base access May 17 00:24:50.910945 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 17 00:24:50.910954 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 17 00:24:50.910962 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 17 00:24:50.910971 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 17 00:24:50.910980 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 17 00:24:50.910988 kernel: ACPI: Added _OSI(Module Device) May 17 00:24:50.910997 kernel: ACPI: Added _OSI(Processor Device) May 17 00:24:50.911006 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 17 00:24:50.911015 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 17 00:24:50.911026 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded May 17 00:24:50.911035 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 17 00:24:50.911044 kernel: ACPI: Interpreter enabled May 17 00:24:50.911052 kernel: ACPI: PM: (supports S0 S5) May 17 00:24:50.911061 kernel: ACPI: Using IOAPIC for interrupt routing May 17 00:24:50.911070 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 17 00:24:50.911078 kernel: PCI: Using E820 reservations for host bridge windows May 17 00:24:50.911087 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F May 17 00:24:50.911096 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 17 00:24:50.911251 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] May 17 00:24:50.911350 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] May 17 00:24:50.911441 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge May 17 00:24:50.911452 kernel: acpiphp: Slot [3] registered May 17 00:24:50.911461 kernel: acpiphp: Slot [4] registered May 17 00:24:50.911470 kernel: acpiphp: Slot [5] registered May 17 00:24:50.911478 kernel: acpiphp: Slot [6] registered May 17 00:24:50.911513 kernel: acpiphp: Slot [7] registered May 17 00:24:50.911522 kernel: acpiphp: Slot [8] registered May 17 00:24:50.911530 kernel: acpiphp: Slot [9] registered May 17 00:24:50.911539 kernel: acpiphp: Slot [10] registered May 17 00:24:50.911548 kernel: acpiphp: Slot [11] registered May 17 00:24:50.911556 kernel: acpiphp: Slot [12] registered May 17 00:24:50.911565 kernel: acpiphp: Slot [13] registered May 17 00:24:50.911574 kernel: acpiphp: Slot [14] registered May 17 00:24:50.911582 kernel: acpiphp: Slot [15] registered May 17 00:24:50.911591 kernel: acpiphp: Slot [16] registered May 17 00:24:50.911602 kernel: acpiphp: Slot [17] registered May 17 00:24:50.911610 kernel: acpiphp: Slot [18] registered May 17 00:24:50.911619 kernel: acpiphp: Slot [19] registered May 17 00:24:50.911628 kernel: acpiphp: Slot [20] registered May 17 00:24:50.911636 kernel: acpiphp: Slot [21] registered May 17 00:24:50.911645 kernel: acpiphp: Slot [22] registered May 17 00:24:50.911654 kernel: acpiphp: Slot [23] registered May 17 00:24:50.911662 kernel: acpiphp: Slot [24] registered May 17 00:24:50.911671 kernel: acpiphp: Slot [25] registered May 17 00:24:50.911682 kernel: acpiphp: Slot [26] registered May 17 00:24:50.911690 kernel: acpiphp: Slot [27] registered May 17 00:24:50.911699 kernel: acpiphp: Slot [28] registered May 17 00:24:50.911708 kernel: acpiphp: Slot [29] registered May 17 00:24:50.911716 kernel: acpiphp: Slot [30] registered May 17 00:24:50.911725 kernel: acpiphp: Slot [31] registered May 17 00:24:50.911733 kernel: PCI host bridge to bus 0000:00 
May 17 00:24:50.911832 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 17 00:24:50.911915 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 17 00:24:50.911999 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 17 00:24:50.912079 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] May 17 00:24:50.912159 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] May 17 00:24:50.912239 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 17 00:24:50.912342 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 May 17 00:24:50.912446 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 May 17 00:24:50.912561 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 May 17 00:24:50.912652 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI May 17 00:24:50.912741 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff May 17 00:24:50.912831 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff May 17 00:24:50.912919 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff May 17 00:24:50.913008 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff May 17 00:24:50.913096 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff May 17 00:24:50.913189 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff May 17 00:24:50.913283 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 May 17 00:24:50.913372 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref] May 17 00:24:50.913461 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] May 17 00:24:50.913573 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb May 17 00:24:50.913663 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 17 00:24:50.913758 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 May 17 00:24:50.913852 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff] May 17 00:24:50.913946 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 May 17 00:24:50.914035 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff] May 17 00:24:50.914047 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 17 00:24:50.914057 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 17 00:24:50.914065 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 17 00:24:50.914074 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 17 00:24:50.914086 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 May 17 00:24:50.914095 kernel: iommu: Default domain type: Translated May 17 00:24:50.914104 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 17 00:24:50.914112 kernel: efivars: Registered efivars operations May 17 00:24:50.914121 kernel: PCI: Using ACPI for IRQ routing May 17 00:24:50.914130 kernel: PCI: pci_cache_line_size set to 64 bytes May 17 00:24:50.914139 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] May 17 00:24:50.914147 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] May 17 00:24:50.914234 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device May 17 00:24:50.914326 kernel: pci 0000:00:03.0: vgaarb: bridge control possible May 17 00:24:50.914416 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 17 00:24:50.914427 kernel: vgaarb: loaded May 17 00:24:50.914436 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
May 17 00:24:50.914445 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter May 17 00:24:50.914454 kernel: clocksource: Switched to clocksource kvm-clock May 17 00:24:50.914463 kernel: VFS: Disk quotas dquot_6.6.0 May 17 00:24:50.914472 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 17 00:24:50.914874 kernel: pnp: PnP ACPI init May 17 00:24:50.914894 kernel: pnp: PnP ACPI: found 5 devices May 17 00:24:50.914903 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 17 00:24:50.914913 kernel: NET: Registered PF_INET protocol family May 17 00:24:50.914922 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 17 00:24:50.914931 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) May 17 00:24:50.914940 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 17 00:24:50.914949 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 17 00:24:50.914958 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) May 17 00:24:50.914967 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 17 00:24:50.914978 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 17 00:24:50.914987 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) May 17 00:24:50.914996 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 17 00:24:50.915005 kernel: NET: Registered PF_XDP protocol family May 17 00:24:50.915353 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 17 00:24:50.915447 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 17 00:24:50.915543 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 17 00:24:50.915625 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] May 17 00:24:50.915711 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] May 17 00:24:50.915809 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 17 00:24:50.915821 kernel: PCI: CLS 0 bytes, default 64 May 17 00:24:50.915830 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 17 00:24:50.915839 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns May 17 00:24:50.915849 kernel: clocksource: Switched to clocksource tsc May 17 00:24:50.915858 kernel: Initialise system trusted keyrings May 17 00:24:50.915867 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 17 00:24:50.915876 kernel: Key type asymmetric registered May 17 00:24:50.915887 kernel: Asymmetric key parser 'x509' registered May 17 00:24:50.915896 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 17 00:24:50.915905 kernel: io scheduler mq-deadline registered May 17 00:24:50.915914 kernel: io scheduler kyber registered May 17 00:24:50.915922 kernel: io scheduler bfq registered May 17 00:24:50.915931 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 17 00:24:50.915940 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 00:24:50.915949 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 17 00:24:50.915958 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 17 00:24:50.915969 kernel: i8042: Warning: Keylock active May 17 00:24:50.915978 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 17 00:24:50.915987 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 17 00:24:50.916084 kernel: rtc_cmos 00:00: RTC can wake from S4 May 17 00:24:50.916171 kernel: rtc_cmos 00:00: registered as rtc0 May 17 00:24:50.916256 kernel: rtc_cmos 00:00: setting system clock to 2025-05-17T00:24:50 UTC (1747441490) May 17 00:24:50.916339 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram May 17 00:24:50.916351 kernel: intel_pstate: CPU model not supported May 17 00:24:50.916362 kernel: efifb: probing for efifb May 17 00:24:50.916371 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k May 17 00:24:50.916380 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 May 17 00:24:50.916389 kernel: efifb: scrolling: redraw May 17 00:24:50.916398 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 17 00:24:50.916407 kernel: Console: switching to colour frame buffer device 100x37 May 17 00:24:50.916416 kernel: fb0: EFI VGA frame buffer device May 17 00:24:50.916424 kernel: pstore: Using crash dump compression: deflate May 17 00:24:50.916433 kernel: pstore: Registered efi_pstore as persistent store backend May 17 00:24:50.916445 kernel: NET: Registered PF_INET6 protocol family May 17 00:24:50.916454 kernel: Segment Routing with IPv6 May 17 00:24:50.916462 kernel: In-situ OAM (IOAM) with IPv6 May 17 00:24:50.916471 kernel: NET: Registered PF_PACKET protocol family May 17 00:24:50.916494 kernel: Key type dns_resolver registered May 17 00:24:50.916503 kernel: IPI shorthand broadcast: enabled May 17 00:24:50.916533 kernel: sched_clock: Marking stable (467001589, 127850176)->(660400106, -65548341) May 17 00:24:50.916545 kernel: registered taskstats version 1 May 17 00:24:50.916555 kernel: Loading compiled-in X.509 certificates May 17 00:24:50.916567 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 85b8d1234ceca483cb3defc2030d93f7792663c9' May 17 00:24:50.916576 kernel: Key type .fscrypt registered May 17 00:24:50.916585 kernel: Key type fscrypt-provisioning registered May 17 00:24:50.916594 kernel: ima: No TPM chip found, activating TPM-bypass! May 17 00:24:50.916603 kernel: ima: Allocated hash algorithm: sha1 May 17 00:24:50.916613 kernel: ima: No architecture policies found May 17 00:24:50.916622 kernel: clk: Disabling unused clocks May 17 00:24:50.916631 kernel: Freeing unused kernel image (initmem) memory: 42872K May 17 00:24:50.916641 kernel: Write protecting the kernel read-only data: 36864k May 17 00:24:50.916652 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K May 17 00:24:50.916662 kernel: Run /init as init process May 17 00:24:50.916671 kernel: with arguments: May 17 00:24:50.916683 kernel: /init May 17 00:24:50.916692 kernel: with environment: May 17 00:24:50.916701 kernel: HOME=/ May 17 00:24:50.916710 kernel: TERM=linux May 17 00:24:50.916719 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 00:24:50.916731 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 00:24:50.916745 systemd[1]: Detected virtualization amazon. May 17 00:24:50.916755 systemd[1]: Detected architecture x86-64. May 17 00:24:50.916764 systemd[1]: Running in initrd.
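
The rtc_cmos entry in this stretch of the log pairs a wall-clock reading with the Unix epoch second it was derived from (2025-05-17T00:24:50 UTC / 1747441490). A quick sanity check of that pairing, sketched in Python; the constant is copied from the log line itself, nothing else is assumed:

    # Confirm that epoch 1747441490 is 2025-05-17T00:24:50 UTC,
    # as the "rtc_cmos 00:00: setting system clock" line states.
    from datetime import datetime, timezone

    epoch = 1747441490  # value logged by rtc_cmos above
    print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
    # -> 2025-05-17T00:24:50+00:00
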
May 17 00:24:50.916774 systemd[1]: No hostname configured, using default hostname. May 17 00:24:50.916783 systemd[1]: Hostname set to <localhost>. May 17 00:24:50.917138 systemd[1]: Initializing machine ID from VM UUID. May 17 00:24:50.917153 systemd[1]: Queued start job for default target initrd.target. May 17 00:24:50.917163 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:24:50.917172 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:24:50.917183 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 17 00:24:50.917193 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 00:24:50.917203 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 17 00:24:50.917213 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 17 00:24:50.917226 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 17 00:24:50.917236 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 17 00:24:50.917246 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:24:50.917256 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:24:50.917266 systemd[1]: Reached target paths.target - Path Units. May 17 00:24:50.917278 systemd[1]: Reached target slices.target - Slice Units. May 17 00:24:50.917288 systemd[1]: Reached target swap.target - Swaps. May 17 00:24:50.917297 systemd[1]: Reached target timers.target - Timer Units. May 17 00:24:50.917307 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 17 00:24:50.917317 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:24:50.917327 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 17 00:24:50.917337 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 17 00:24:50.917347 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 00:24:50.917357 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 00:24:50.917369 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:24:50.917379 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:24:50.917389 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 17 00:24:50.917398 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 00:24:50.917408 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 17 00:24:50.917418 systemd[1]: Starting systemd-fsck-usr.service... May 17 00:24:50.917428 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 00:24:50.917438 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 00:24:50.917450 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:24:50.917459 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 17 00:24:50.917573 systemd-journald[178]: Collecting audit messages is disabled. May 17 00:24:50.917596 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:24:50.917610 systemd-journald[178]: Journal started May 17 00:24:50.917631 systemd-journald[178]: Runtime Journal (/run/log/journal/ec23a67b8d24ae402e00d6fc9c952195) is 4.7M, max 38.2M, 33.4M free. May 17 00:24:50.920686 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:24:50.920531 systemd-modules-load[179]: Inserted module 'overlay' May 17 00:24:50.921740 systemd[1]: Finished systemd-fsck-usr.service. May 17 00:24:50.931630 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 00:24:50.933769 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 00:24:50.944533 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:24:50.956551 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 17 00:24:50.959024 systemd-modules-load[179]: Inserted module 'br_netfilter' May 17 00:24:50.959550 kernel: Bridge firewalling registered May 17 00:24:50.960689 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:24:50.961379 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 00:24:50.962476 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:24:50.963108 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:24:50.967643 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:24:50.969627 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 00:24:50.979662 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:24:50.983619 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 00:24:50.985761 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:24:50.986361 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:24:50.994065 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 17 00:24:51.005444 dracut-cmdline[214]: dracut-dracut-053 May 17 00:24:51.009107 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 17 00:24:51.010441 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e May 17 00:24:51.018632 systemd-resolved[212]: Positive Trust Anchors: May 17 00:24:51.019319 systemd-resolved[212]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:24:51.019357 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:24:51.022271 systemd-resolved[212]: Defaulting to hostname 'linux'. May 17 00:24:51.023224 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:24:51.024727 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:24:51.087522 kernel: SCSI subsystem initialized May 17 00:24:51.097585 kernel: Loading iSCSI transport class v2.0-870. May 17 00:24:51.108509 kernel: iscsi: registered transport (tcp) May 17 00:24:51.129801 kernel: iscsi: registered transport (qla4xxx) May 17 00:24:51.129878 kernel: QLogic iSCSI HBA Driver May 17 00:24:51.168882 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 17 00:24:51.174697 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 17 00:24:51.201146 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 17 00:24:51.201227 kernel: device-mapper: uevent: version 1.0.3 May 17 00:24:51.201251 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 17 00:24:51.244530 kernel: raid6: avx512x4 gen() 18136 MB/s May 17 00:24:51.262509 kernel: raid6: avx512x2 gen() 18076 MB/s May 17 00:24:51.280513 kernel: raid6: avx512x1 gen() 18086 MB/s May 17 00:24:51.298506 kernel: raid6: avx2x4 gen() 18011 MB/s May 17 00:24:51.315508 kernel: raid6: avx2x2 gen() 17982 MB/s May 17 00:24:51.332740 kernel: raid6: avx2x1 gen() 13658 MB/s May 17 00:24:51.332790 kernel: raid6: using algorithm avx512x4 gen() 18136 MB/s May 17 00:24:51.352586 kernel: raid6: .... xor() 8120 MB/s, rmw enabled May 17 00:24:51.352655 kernel: raid6: using avx512x2 recovery algorithm May 17 00:24:51.374524 kernel: xor: automatically using best checksumming function avx May 17 00:24:51.537529 kernel: Btrfs loaded, zoned=no, fsverity=no May 17 00:24:51.548245 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 17 00:24:51.557731 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:24:51.570695 systemd-udevd[398]: Using default interface naming scheme 'v255'. May 17 00:24:51.575718 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:24:51.582665 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 17 00:24:51.602151 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation May 17 00:24:51.631728 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:24:51.637984 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 00:24:51.687398 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:24:51.694912 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
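
The raid6 lines above show the kernel benchmarking every available gen() implementation and then keeping whichever measured fastest; the selection is simply the maximum throughput. A minimal illustration of that choice over the numbers the kernel just logged (values copied from the benchmark lines; the variable names are only for this sketch):

    # Reproduce the kernel's raid6 pick from the throughputs it logged.
    measured_mb_s = {
        "avx512x4": 18136, "avx512x2": 18076, "avx512x1": 18086,
        "avx2x4": 18011, "avx2x2": 17982, "avx2x1": 13658,
    }
    best = max(measured_mb_s, key=measured_mb_s.get)
    print(best, measured_mb_s[best])  # -> avx512x4 18136, matching "using algorithm avx512x4"
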
May 17 00:24:51.725738 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 17 00:24:51.731330 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 17 00:24:51.733395 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:24:51.733957 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 00:24:51.741698 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 17 00:24:51.765626 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 17 00:24:51.788520 kernel: cryptd: max_cpu_qlen set to 1000 May 17 00:24:51.811603 kernel: ena 0000:00:05.0: ENA device version: 0.10 May 17 00:24:51.811899 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 May 17 00:24:51.813217 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:24:51.818648 kernel: AVX2 version of gcm_enc/dec engaged. May 17 00:24:51.818696 kernel: AES CTR mode by8 optimization enabled May 17 00:24:51.818715 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. May 17 00:24:51.814905 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:24:51.820478 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:24:51.820998 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:24:51.821193 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:24:51.822324 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:24:51.833513 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:5c:17:5b:d1:85 May 17 00:24:51.832031 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:24:51.845683 (udev-worker)[450]: Network interface NamePolicy= disabled on kernel command line. May 17 00:24:51.855090 kernel: nvme nvme0: pci function 0000:00:04.0 May 17 00:24:51.855325 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 May 17 00:24:51.859446 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:24:51.860388 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:24:51.868519 kernel: nvme nvme0: 2/0/0 default/read/poll queues May 17 00:24:51.871779 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:24:51.880539 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 17 00:24:51.880603 kernel: GPT:9289727 != 16777215 May 17 00:24:51.881563 kernel: GPT:Alternate GPT header not at the end of the disk. May 17 00:24:51.882878 kernel: GPT:9289727 != 16777215 May 17 00:24:51.882923 kernel: GPT: Use GNU Parted to correct GPT errors. May 17 00:24:51.882944 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 17 00:24:51.894593 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:24:51.903722 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:24:51.920434 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:24:51.966215 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (443) May 17 00:24:52.000825 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. 
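
The GPT warnings above ("GPT:9289727 != 16777215", "Alternate GPT header not at the end of the disk") are the usual first-boot signature of a disk image written to a larger volume: the backup GPT header still sits at the last LBA of the original image rather than at the end of the provisioned EBS disk. Assuming 512-byte sectors (an assumption, though standard for this device class), the two LBAs from the log translate to sizes as sketched below; the disk-uuid step logged further down rewrites both GPT headers, which is typically where this gets reconciled without manual intervention.

    # Interpret "GPT:9289727 != 16777215" from the log.
    # Assumption: 512-byte logical sectors; LBAs are 0-based.
    GiB = 1024 ** 3
    image_last_lba = 9_289_727    # where the image's backup GPT header sits now
    disk_last_lba = 16_777_215    # where it belongs on this volume
    print((image_last_lba + 1) * 512 / GiB)  # ~4.43 GiB: size the image was built for
    print((disk_last_lba + 1) * 512 / GiB)   # 8.0 GiB: the provisioned volume size
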
May 17 00:24:52.009543 kernel: BTRFS: device fsid 7f88d479-6686-439c-8052-b96f0a9d77bc devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (456) May 17 00:24:52.011116 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. May 17 00:24:52.039086 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. May 17 00:24:52.045202 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. May 17 00:24:52.045972 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. May 17 00:24:52.051685 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 17 00:24:52.059948 disk-uuid[631]: Primary Header is updated. May 17 00:24:52.059948 disk-uuid[631]: Secondary Entries is updated. May 17 00:24:52.059948 disk-uuid[631]: Secondary Header is updated. May 17 00:24:52.067561 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 17 00:24:52.071505 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 17 00:24:53.084601 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 17 00:24:53.086111 disk-uuid[632]: The operation has completed successfully. May 17 00:24:53.230245 systemd[1]: disk-uuid.service: Deactivated successfully. May 17 00:24:53.230369 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 17 00:24:53.246695 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 17 00:24:53.252640 sh[975]: Success May 17 00:24:53.273659 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 17 00:24:53.388044 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 17 00:24:53.395585 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 17 00:24:53.398520 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 17 00:24:53.429720 kernel: BTRFS info (device dm-0): first mount of filesystem 7f88d479-6686-439c-8052-b96f0a9d77bc May 17 00:24:53.430044 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 17 00:24:53.430063 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 17 00:24:53.432937 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 17 00:24:53.433009 kernel: BTRFS info (device dm-0): using free space tree May 17 00:24:53.550508 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 17 00:24:53.575476 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 17 00:24:53.576507 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 17 00:24:53.580683 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 17 00:24:53.582660 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 17 00:24:53.613383 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:24:53.613451 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm May 17 00:24:53.613473 kernel: BTRFS info (device nvme0n1p6): using free space tree May 17 00:24:53.621528 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 17 00:24:53.631804 systemd[1]: mnt-oem.mount: Deactivated successfully. 
May 17 00:24:53.633878 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:24:53.640299 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 17 00:24:53.648711 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 17 00:24:53.678930 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:24:53.688750 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:24:53.708447 systemd-networkd[1167]: lo: Link UP May 17 00:24:53.708460 systemd-networkd[1167]: lo: Gained carrier May 17 00:24:53.710199 systemd-networkd[1167]: Enumeration completed May 17 00:24:53.710604 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 00:24:53.710668 systemd-networkd[1167]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:24:53.710673 systemd-networkd[1167]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:24:53.712764 systemd[1]: Reached target network.target - Network. May 17 00:24:53.714226 systemd-networkd[1167]: eth0: Link UP May 17 00:24:53.714231 systemd-networkd[1167]: eth0: Gained carrier May 17 00:24:53.714242 systemd-networkd[1167]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:24:53.723567 systemd-networkd[1167]: eth0: DHCPv4 address 172.31.23.228/20, gateway 172.31.16.1 acquired from 172.31.16.1 May 17 00:24:54.056466 ignition[1116]: Ignition 2.19.0 May 17 00:24:54.056530 ignition[1116]: Stage: fetch-offline May 17 00:24:54.056772 ignition[1116]: no configs at "/usr/lib/ignition/base.d" May 17 00:24:54.056782 ignition[1116]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 17 00:24:54.057014 ignition[1116]: Ignition finished successfully May 17 00:24:54.058265 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:24:54.063689 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
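
The DHCPv4 lease logged above (172.31.23.228/20 from gateway 172.31.16.1) is self-consistent, and the standard library can show why: the /20 containing the leased address is 172.31.16.0/20, which also contains the gateway and DHCP server. A small check using only the two addresses from the log:

    import ipaddress

    iface = ipaddress.ip_interface("172.31.23.228/20")   # leased address from the log
    gw = ipaddress.ip_address("172.31.16.1")             # gateway / DHCP server
    print(iface.network)        # 172.31.16.0/20
    print(gw in iface.network)  # True: the gateway is on-link, as expected
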
May 17 00:24:54.079269 ignition[1175]: Ignition 2.19.0 May 17 00:24:54.079282 ignition[1175]: Stage: fetch May 17 00:24:54.079801 ignition[1175]: no configs at "/usr/lib/ignition/base.d" May 17 00:24:54.079816 ignition[1175]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 17 00:24:54.079935 ignition[1175]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 17 00:24:54.130325 ignition[1175]: PUT result: OK May 17 00:24:54.142346 ignition[1175]: parsed url from cmdline: "" May 17 00:24:54.142356 ignition[1175]: no config URL provided May 17 00:24:54.142364 ignition[1175]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:24:54.142376 ignition[1175]: no config at "/usr/lib/ignition/user.ign" May 17 00:24:54.142394 ignition[1175]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 17 00:24:54.149662 ignition[1175]: PUT result: OK May 17 00:24:54.149758 ignition[1175]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 May 17 00:24:54.150896 ignition[1175]: GET result: OK May 17 00:24:54.150999 ignition[1175]: parsing config with SHA512: 197e2dad9886730e0a6f4d72c49942e07ba4babb3a77f0705d9419113ec21e687aee91bc6a6749f9113798ec3ec8f7dcb2635284b692af6c1906637b9fe14d6f May 17 00:24:54.156515 unknown[1175]: fetched base config from "system" May 17 00:24:54.157368 ignition[1175]: fetch: fetch complete May 17 00:24:54.156537 unknown[1175]: fetched base config from "system" May 17 00:24:54.157377 ignition[1175]: fetch: fetch passed May 17 00:24:54.156546 unknown[1175]: fetched user config from "aws" May 17 00:24:54.157438 ignition[1175]: Ignition finished successfully May 17 00:24:54.160017 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 17 00:24:54.171784 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 17 00:24:54.187353 ignition[1182]: Ignition 2.19.0 May 17 00:24:54.187368 ignition[1182]: Stage: kargs May 17 00:24:54.187890 ignition[1182]: no configs at "/usr/lib/ignition/base.d" May 17 00:24:54.187904 ignition[1182]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 17 00:24:54.188027 ignition[1182]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 17 00:24:54.189107 ignition[1182]: PUT result: OK May 17 00:24:54.192016 ignition[1182]: kargs: kargs passed May 17 00:24:54.192092 ignition[1182]: Ignition finished successfully May 17 00:24:54.195063 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 17 00:24:54.200747 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 17 00:24:54.215566 ignition[1188]: Ignition 2.19.0 May 17 00:24:54.215581 ignition[1188]: Stage: disks May 17 00:24:54.216074 ignition[1188]: no configs at "/usr/lib/ignition/base.d" May 17 00:24:54.216089 ignition[1188]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 17 00:24:54.216213 ignition[1188]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 17 00:24:54.217301 ignition[1188]: PUT result: OK May 17 00:24:54.221381 ignition[1188]: disks: disks passed May 17 00:24:54.221443 ignition[1188]: Ignition finished successfully May 17 00:24:54.222503 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 17 00:24:54.223319 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 17 00:24:54.223706 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 17 00:24:54.224195 systemd[1]: Reached target local-fs.target - Local File Systems. 
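
The PUT/GET pairs in the fetch stage above are the EC2 IMDSv2 exchange: Ignition first PUTs to /latest/api/token for a short-lived session token, then presents that token on the user-data GET. A minimal sketch of the same exchange follows; the header names are the documented IMDSv2 ones, the TTL value is an arbitrary choice for this sketch, and the requests only succeed from inside an EC2 instance.

    import urllib.request

    IMDS = "http://169.254.169.254"

    # Step 1: PUT for a session token (mirrors Ignition's
    # "PUT http://169.254.169.254/latest/api/token" lines above).
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token", method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "60"},
    )
    token = urllib.request.urlopen(req, timeout=2).read().decode()

    # Step 2: GET the user data with that token (mirrors the
    # "GET http://169.254.169.254/2019-10-01/user-data" line).
    req = urllib.request.Request(
        f"{IMDS}/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token},
    )
    print(urllib.request.urlopen(req, timeout=2).read().decode())
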
May 17 00:24:54.224818 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:24:54.225395 systemd[1]: Reached target basic.target - Basic System. May 17 00:24:54.230763 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 17 00:24:54.270658 systemd-fsck[1196]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 17 00:24:54.274065 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 17 00:24:54.278625 systemd[1]: Mounting sysroot.mount - /sysroot... May 17 00:24:54.386507 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 278698a4-82b6-49b4-b6df-f7999ed4e35e r/w with ordered data mode. Quota mode: none. May 17 00:24:54.387535 systemd[1]: Mounted sysroot.mount - /sysroot. May 17 00:24:54.388687 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 17 00:24:54.395631 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:24:54.399312 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 17 00:24:54.401265 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 17 00:24:54.402575 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 17 00:24:54.402615 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:24:54.417002 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 17 00:24:54.422699 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 17 00:24:54.426152 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1215) May 17 00:24:54.431671 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:24:54.431734 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm May 17 00:24:54.431748 kernel: BTRFS info (device nvme0n1p6): using free space tree May 17 00:24:54.440524 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 17 00:24:54.441774 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 17 00:24:54.854240 systemd-networkd[1167]: eth0: Gained IPv6LL May 17 00:24:54.881362 initrd-setup-root[1239]: cut: /sysroot/etc/passwd: No such file or directory May 17 00:24:54.896852 initrd-setup-root[1246]: cut: /sysroot/etc/group: No such file or directory May 17 00:24:54.902188 initrd-setup-root[1253]: cut: /sysroot/etc/shadow: No such file or directory May 17 00:24:54.919413 initrd-setup-root[1260]: cut: /sysroot/etc/gshadow: No such file or directory May 17 00:24:55.190937 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 17 00:24:55.195603 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 17 00:24:55.197147 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 17 00:24:55.206253 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
May 17 00:24:55.208521 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:24:55.232222 ignition[1328]: INFO : Ignition 2.19.0
May 17 00:24:55.233595 ignition[1328]: INFO : Stage: mount
May 17 00:24:55.233595 ignition[1328]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:24:55.233595 ignition[1328]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 17 00:24:55.233595 ignition[1328]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 17 00:24:55.235247 ignition[1328]: INFO : PUT result: OK
May 17 00:24:55.238139 ignition[1328]: INFO : mount: mount passed
May 17 00:24:55.238139 ignition[1328]: INFO : Ignition finished successfully
May 17 00:24:55.239783 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 17 00:24:55.244617 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 17 00:24:55.247356 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 17 00:24:55.256715 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 00:24:55.281505 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1340)
May 17 00:24:55.285786 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:24:55.285849 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:24:55.285863 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 17 00:24:55.294522 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 17 00:24:55.296633 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 00:24:55.319439 ignition[1357]: INFO : Ignition 2.19.0
May 17 00:24:55.319439 ignition[1357]: INFO : Stage: files
May 17 00:24:55.320826 ignition[1357]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:24:55.320826 ignition[1357]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 17 00:24:55.320826 ignition[1357]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 17 00:24:55.322092 ignition[1357]: INFO : PUT result: OK
May 17 00:24:55.323731 ignition[1357]: DEBUG : files: compiled without relabeling support, skipping
May 17 00:24:55.334530 ignition[1357]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 17 00:24:55.335277 ignition[1357]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 17 00:24:55.353264 ignition[1357]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 17 00:24:55.354085 ignition[1357]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 17 00:24:55.354085 ignition[1357]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 17 00:24:55.353821 unknown[1357]: wrote ssh authorized keys file for user: core
May 17 00:24:55.356249 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 17 00:24:55.357056 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 17 00:24:55.463862 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 17 00:24:55.707723 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 17 00:24:55.707723 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 17 00:24:55.709461 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 17 00:24:55.709461 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:24:55.709461 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:24:55.709461 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:24:55.709461 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:24:55.709461 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:24:55.709461 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:24:55.714131 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:24:55.714131 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:24:55.714131 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 17 00:24:55.714131 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 17 00:24:55.714131 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 17 00:24:55.714131 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
May 17 00:24:56.311514 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 17 00:24:56.767149 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 17 00:24:56.767149 ignition[1357]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 17 00:24:56.769764 ignition[1357]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:24:56.771175 ignition[1357]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:24:56.771175 ignition[1357]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 17 00:24:56.771175 ignition[1357]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
May 17 00:24:56.771175 ignition[1357]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
May 17 00:24:56.771175 ignition[1357]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:24:56.771175 ignition[1357]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:24:56.771175 ignition[1357]: INFO : files: files passed
May 17 00:24:56.771175 ignition[1357]: INFO : Ignition finished successfully
May 17 00:24:56.772296 systemd[1]: Finished ignition-files.service - Ignition (files).
May 17 00:24:56.779749 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 17 00:24:56.781819 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 17 00:24:56.788858 systemd[1]: ignition-quench.service: Deactivated successfully.
May 17 00:24:56.788979 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 17 00:24:56.798466 initrd-setup-root-after-ignition[1386]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:24:56.800938 initrd-setup-root-after-ignition[1386]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:24:56.802373 initrd-setup-root-after-ignition[1390]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:24:56.802961 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 00:24:56.804909 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 17 00:24:56.809792 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 17 00:24:56.845782 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 17 00:24:56.845925 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 17 00:24:56.847853 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 17 00:24:56.848345 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 17 00:24:56.849186 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 17 00:24:56.856681 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 17 00:24:56.869837 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 17 00:24:56.873871 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 17 00:24:56.887766 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 17 00:24:56.888433 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:24:56.889418 systemd[1]: Stopped target timers.target - Timer Units.
May 17 00:24:56.890395 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 17 00:24:56.890598 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 17 00:24:56.891793 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 17 00:24:56.892651 systemd[1]: Stopped target basic.target - Basic System.
May 17 00:24:56.893450 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 17 00:24:56.894329 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 00:24:56.895124 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 17 00:24:56.895926 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 17 00:24:56.896707 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
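The files stage above is driven by the merged Ignition config fetched earlier: it writes files, symlinks, and systemd units, then enables presets. A hypothetical fragment of such a config (spec 3.x, which Ignition 2.19 accepts), reconstructing just the helm download, the kubernetes.raw link, and the prepare-helm.service enablement seen in the log; the version string and the abbreviated unit contents are illustrative, not recovered from this machine:

    import json

    config = {
        "ignition": {"version": "3.4.0"},
        "storage": {
            "files": [{
                "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
            }],
            "links": [{
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw",
            }],
        },
        "systemd": {
            # Unit body elided; Ignition writes it verbatim and sets the preset.
            "units": [{"name": "prepare-helm.service", "enabled": True,
                       "contents": "[Unit]\n..."}],
        },
    }
    print(json.dumps(config, indent=2))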
May 17 00:24:56.897651 systemd[1]: Stopped target sysinit.target - System Initialization.
May 17 00:24:56.898759 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 17 00:24:56.899526 systemd[1]: Stopped target swap.target - Swaps.
May 17 00:24:56.900237 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 17 00:24:56.900416 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 17 00:24:56.901669 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 17 00:24:56.902409 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:24:56.903118 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 17 00:24:56.903859 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:24:56.904320 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 17 00:24:56.904593 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 17 00:24:56.906078 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 17 00:24:56.906263 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 00:24:56.906978 systemd[1]: ignition-files.service: Deactivated successfully.
May 17 00:24:56.907128 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 17 00:24:56.913807 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 17 00:24:56.915391 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 17 00:24:56.916309 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:24:56.920690 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 17 00:24:56.921350 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 17 00:24:56.921661 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:24:56.922447 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 17 00:24:56.926442 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 00:24:56.933140 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 17 00:24:56.933300 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 17 00:24:56.943512 ignition[1410]: INFO : Ignition 2.19.0
May 17 00:24:56.943512 ignition[1410]: INFO : Stage: umount
May 17 00:24:56.945802 ignition[1410]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:24:56.945802 ignition[1410]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 17 00:24:56.945802 ignition[1410]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 17 00:24:56.945802 ignition[1410]: INFO : PUT result: OK
May 17 00:24:56.951239 ignition[1410]: INFO : umount: umount passed
May 17 00:24:56.951239 ignition[1410]: INFO : Ignition finished successfully
May 17 00:24:56.953409 systemd[1]: ignition-mount.service: Deactivated successfully.
May 17 00:24:56.953573 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 17 00:24:56.954259 systemd[1]: ignition-disks.service: Deactivated successfully.
May 17 00:24:56.954324 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 17 00:24:56.954862 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 17 00:24:56.954913 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 17 00:24:56.955420 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 17 00:24:56.955471 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 17 00:24:56.957232 systemd[1]: Stopped target network.target - Network.
May 17 00:24:56.957911 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 17 00:24:56.957975 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 00:24:56.958579 systemd[1]: Stopped target paths.target - Path Units.
May 17 00:24:56.959112 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 17 00:24:56.959182 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:24:56.959746 systemd[1]: Stopped target slices.target - Slice Units.
May 17 00:24:56.961041 systemd[1]: Stopped target sockets.target - Socket Units.
May 17 00:24:56.961598 systemd[1]: iscsid.socket: Deactivated successfully.
May 17 00:24:56.961660 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 17 00:24:56.962152 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 17 00:24:56.962201 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 17 00:24:56.963897 systemd[1]: ignition-setup.service: Deactivated successfully.
May 17 00:24:56.963963 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 17 00:24:56.964384 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 17 00:24:56.964443 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 17 00:24:56.965163 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 17 00:24:56.966123 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 17 00:24:56.968277 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 17 00:24:56.969096 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 17 00:24:56.969211 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 17 00:24:56.969599 systemd-networkd[1167]: eth0: DHCPv6 lease lost
May 17 00:24:56.971263 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 17 00:24:56.971372 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 17 00:24:56.972744 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 17 00:24:56.972881 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 17 00:24:56.976576 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 17 00:24:56.976729 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 17 00:24:56.978627 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 17 00:24:56.978702 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:24:56.986655 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 17 00:24:56.987297 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 17 00:24:56.987376 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 00:24:56.988029 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 00:24:56.988091 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 17 00:24:56.988626 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 17 00:24:56.988682 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 17 00:24:56.989311 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 17 00:24:56.989367 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:24:56.990112 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:24:57.007168 systemd[1]: network-cleanup.service: Deactivated successfully.
May 17 00:24:57.007280 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 17 00:24:57.011243 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 17 00:24:57.011435 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:24:57.012701 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 17 00:24:57.012760 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 17 00:24:57.013760 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 17 00:24:57.013805 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:24:57.014585 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 17 00:24:57.014654 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 17 00:24:57.015723 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 17 00:24:57.015784 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 17 00:24:57.016864 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:24:57.016927 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:24:57.024735 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 17 00:24:57.025375 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 17 00:24:57.025460 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:24:57.026185 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:24:57.026247 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:24:57.033217 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 17 00:24:57.033355 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 17 00:24:57.034564 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 17 00:24:57.045926 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 17 00:24:57.054014 systemd[1]: Switching root.
May 17 00:24:57.092772 systemd-journald[178]: Journal stopped
May 17 00:24:58.763602 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
May 17 00:24:58.763694 kernel: SELinux: policy capability network_peer_controls=1
May 17 00:24:58.763725 kernel: SELinux: policy capability open_perms=1
May 17 00:24:58.763742 kernel: SELinux: policy capability extended_socket_class=1
May 17 00:24:58.763765 kernel: SELinux: policy capability always_check_network=0
May 17 00:24:58.763786 kernel: SELinux: policy capability cgroup_seclabel=1
May 17 00:24:58.763803 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 17 00:24:58.763820 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 17 00:24:58.763838 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 17 00:24:58.763856 kernel: audit: type=1403 audit(1747441497.603:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 17 00:24:58.763875 systemd[1]: Successfully loaded SELinux policy in 45.598ms.
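The audit record above encodes its wall-clock time as audit(1747441497.603:2): seconds since the Unix epoch, a millisecond fraction, and a per-boot serial number. A quick check that this agrees with the surrounding journal timestamps:

    from datetime import datetime, timezone

    # audit(<epoch seconds>.<millis>:<serial>) from the record above
    ts = datetime.fromtimestamp(1747441497.603, tz=timezone.utc)
    print(ts.isoformat())  # 2025-05-17T00:24:57.603000+00:00, during the root switch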
May 17 00:24:58.763906 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.431ms.
May 17 00:24:58.763928 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 17 00:24:58.763950 systemd[1]: Detected virtualization amazon.
May 17 00:24:58.763974 systemd[1]: Detected architecture x86-64.
May 17 00:24:58.763993 systemd[1]: Detected first boot.
May 17 00:24:58.764020 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:24:58.764043 zram_generator::config[1453]: No configuration found.
May 17 00:24:58.764066 systemd[1]: Populated /etc with preset unit settings.
May 17 00:24:58.764090 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 17 00:24:58.764111 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 17 00:24:58.764131 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 17 00:24:58.764156 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 17 00:24:58.764176 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 17 00:24:58.764195 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 17 00:24:58.764216 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 17 00:24:58.764235 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 17 00:24:58.764253 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 17 00:24:58.764274 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 17 00:24:58.764293 systemd[1]: Created slice user.slice - User and Session Slice.
May 17 00:24:58.764316 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:24:58.764334 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:24:58.764353 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 17 00:24:58.764379 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 17 00:24:58.764398 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 17 00:24:58.764415 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 17 00:24:58.764433 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 17 00:24:58.764452 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:24:58.764470 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 17 00:24:58.764518 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 17 00:24:58.764540 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 17 00:24:58.764560 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 17 00:24:58.764578 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:24:58.764598 systemd[1]: Reached target remote-fs.target - Remote File Systems.
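The long "systemd 255 running in system mode (...)" banner above is a compile-time feature list: each token is a feature name prefixed with + (built in) or - (compiled out), plus key=value settings such as default-hierarchy=unified. A tiny parser for that convention (the banner string below is an abbreviated excerpt, not the full line):

    banner = "+PAM +AUDIT +SELINUX -APPARMOR +IMA -GNUTLS +OPENSSL -ACL"  # excerpt

    enabled  = {t[1:] for t in banner.split() if t.startswith("+")}
    disabled = {t[1:] for t in banner.split() if t.startswith("-")}
    print("enabled:", sorted(enabled))
    print("disabled:", sorted(disabled))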
May 17 00:24:58.764617 systemd[1]: Reached target slices.target - Slice Units.
May 17 00:24:58.764635 systemd[1]: Reached target swap.target - Swaps.
May 17 00:24:58.764654 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 17 00:24:58.764679 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 17 00:24:58.764697 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:24:58.764715 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 17 00:24:58.764734 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:24:58.764752 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 17 00:24:58.764771 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 17 00:24:58.764789 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 17 00:24:58.764807 systemd[1]: Mounting media.mount - External Media Directory...
May 17 00:24:58.764825 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:24:58.764846 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 17 00:24:58.764864 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 17 00:24:58.764882 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 17 00:24:58.764902 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 17 00:24:58.764924 systemd[1]: Reached target machines.target - Containers.
May 17 00:24:58.764946 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 17 00:24:58.764966 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:24:58.764986 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 17 00:24:58.765005 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 17 00:24:58.765027 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 00:24:58.765044 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 17 00:24:58.765062 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 00:24:58.765082 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 17 00:24:58.765106 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 00:24:58.765125 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 17 00:24:58.765146 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 17 00:24:58.765167 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 17 00:24:58.765192 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 17 00:24:58.765213 systemd[1]: Stopped systemd-fsck-usr.service.
May 17 00:24:58.765235 systemd[1]: Starting systemd-journald.service - Journal Service...
May 17 00:24:58.765255 kernel: fuse: init (API version 7.39)
May 17 00:24:58.765276 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
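Unit names like system-serial\x2dgetty.slice in the listing above come from systemd's name-escaping rules: '/' in a path maps to '-', so a literal dash inside a component has to be written as \x2d (the byte's hex code). A rough sketch of that escaping for the simple cases, mirroring what the systemd-escape tool does (real systemd also special-cases a leading dot, which this sketch skips):

    def systemd_escape(s: str) -> str:
        # '/' maps to '-'; anything outside [A-Za-z0-9:_.] is hex-escaped as \xXX
        out = []
        for ch in s:
            if ch == "/":
                out.append("-")
            elif ch.isalnum() or ch in ":_.":
                out.append(ch)
            else:
                out += [f"\\x{b:02x}" for b in ch.encode()]
        return "".join(out)

    print(systemd_escape("serial-getty"))  # -> serial\x2dgetty
    print(systemd_escape("var/lib/foo"))   # -> var-lib-foo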
May 17 00:24:58.765297 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 17 00:24:58.765318 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 17 00:24:58.765339 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 17 00:24:58.765360 systemd[1]: verity-setup.service: Deactivated successfully.
May 17 00:24:58.765383 systemd[1]: Stopped verity-setup.service.
May 17 00:24:58.765404 kernel: loop: module loaded
May 17 00:24:58.765425 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:24:58.765447 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 17 00:24:58.765479 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 17 00:24:58.768329 systemd[1]: Mounted media.mount - External Media Directory.
May 17 00:24:58.768354 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 17 00:24:58.768375 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 17 00:24:58.768396 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 17 00:24:58.768424 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:24:58.768445 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 17 00:24:58.768467 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 17 00:24:58.768545 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:24:58.768600 systemd-journald[1538]: Collecting audit messages is disabled.
May 17 00:24:58.768639 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 17 00:24:58.768660 systemd-journald[1538]: Journal started
May 17 00:24:58.768701 systemd-journald[1538]: Runtime Journal (/run/log/journal/ec23a67b8d24ae402e00d6fc9c952195) is 4.7M, max 38.2M, 33.4M free.
May 17 00:24:58.423426 systemd[1]: Queued start job for default target multi-user.target.
May 17 00:24:58.456924 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
May 17 00:24:58.457343 systemd[1]: systemd-journald.service: Deactivated successfully.
May 17 00:24:58.796154 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:24:58.804575 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 17 00:24:58.804626 systemd[1]: Started systemd-journald.service - Journal Service.
May 17 00:24:58.804652 kernel: ACPI: bus type drm_connector registered
May 17 00:24:58.778556 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 17 00:24:58.778733 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 17 00:24:58.780333 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:24:58.780520 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 17 00:24:58.782270 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 17 00:24:58.786234 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 17 00:24:58.790080 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 17 00:24:58.797616 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 17 00:24:58.812584 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 17 00:24:58.813380 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 17 00:24:58.819717 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 17 00:24:58.826244 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:24:58.828813 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 17 00:24:58.833906 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 17 00:24:58.834964 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 17 00:24:58.835767 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 17 00:24:58.863162 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 17 00:24:58.863253 systemd[1]: Reached target local-fs.target - Local File Systems.
May 17 00:24:58.869432 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 17 00:24:58.878745 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 17 00:24:58.889709 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 17 00:24:58.892709 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:24:58.896699 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 17 00:24:58.900684 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 17 00:24:58.902891 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:24:58.909758 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 17 00:24:58.912752 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 17 00:24:58.914472 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 17 00:24:58.916548 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 17 00:24:58.918118 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 17 00:24:58.927503 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 17 00:24:58.939578 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 17 00:24:58.942703 systemd-journald[1538]: Time spent on flushing to /var/log/journal/ec23a67b8d24ae402e00d6fc9c952195 is 103.189ms for 984 entries.
May 17 00:24:58.942703 systemd-journald[1538]: System Journal (/var/log/journal/ec23a67b8d24ae402e00d6fc9c952195) is 8.0M, max 195.6M, 187.6M free.
May 17 00:24:59.054598 systemd-journald[1538]: Received client request to flush runtime journal.
May 17 00:24:59.054671 kernel: loop0: detected capacity change from 0 to 140768
May 17 00:24:58.941355 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 17 00:24:58.954734 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 17 00:24:58.991558 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:24:58.998840 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 17 00:24:59.017583 udevadm[1594]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 17 00:24:59.059050 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 17 00:24:59.073658 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 17 00:24:59.081747 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 17 00:24:59.084167 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 17 00:24:59.085382 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 17 00:24:59.143210 systemd-tmpfiles[1598]: ACLs are not supported, ignoring.
May 17 00:24:59.146151 systemd-tmpfiles[1598]: ACLs are not supported, ignoring.
May 17 00:24:59.157516 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 17 00:24:59.159877 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:24:59.182050 kernel: loop1: detected capacity change from 0 to 61336
May 17 00:24:59.233519 kernel: loop2: detected capacity change from 0 to 221472
May 17 00:24:59.418873 kernel: loop3: detected capacity change from 0 to 142488
May 17 00:24:59.547516 kernel: loop4: detected capacity change from 0 to 140768
May 17 00:24:59.595923 kernel: loop5: detected capacity change from 0 to 61336
May 17 00:24:59.635506 kernel: loop6: detected capacity change from 0 to 221472
May 17 00:24:59.692539 kernel: loop7: detected capacity change from 0 to 142488
May 17 00:24:59.714042 (sd-merge)[1608]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
May 17 00:24:59.715023 (sd-merge)[1608]: Merged extensions into '/usr'.
May 17 00:24:59.721387 systemd[1]: Reloading requested from client PID 1585 ('systemd-sysext') (unit systemd-sysext.service)...
May 17 00:24:59.721784 systemd[1]: Reloading...
May 17 00:24:59.804515 zram_generator::config[1631]: No configuration found.
May 17 00:24:59.956556 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:25:00.034863 systemd[1]: Reloading finished in 312 ms.
May 17 00:25:00.063415 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 17 00:25:00.064427 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 17 00:25:00.074714 systemd[1]: Starting ensure-sysext.service...
May 17 00:25:00.076987 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 17 00:25:00.080708 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:25:00.092111 systemd[1]: Reloading requested from client PID 1686 ('systemctl') (unit ensure-sysext.service)...
May 17 00:25:00.092132 systemd[1]: Reloading...
May 17 00:25:00.127415 systemd-tmpfiles[1687]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 17 00:25:00.132081 systemd-tmpfiles[1687]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
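The loop devices and the (sd-merge) lines above are systemd-sysext at work: each .raw extension image is attached read-only via a loop device and overlaid onto /usr and /opt, which is how the kubernetes image written by Ignition ends up merged into the OS tree. A sketch of enumerating the images systemd-sysext would consider, using the search directories it documents (this only lists candidates; the actual merge is done by the daemon):

    from pathlib import Path

    # Directories systemd-sysext scans for *.raw extension images.
    SEARCH_PATHS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    for d in map(Path, SEARCH_PATHS):
        if d.is_dir():
            for image in sorted(d.glob("*.raw")):
                # e.g. /etc/extensions/kubernetes.raw -> the symlink Ignition wrote
                print(image, "->", image.resolve())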
May 17 00:25:00.134202 systemd-tmpfiles[1687]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 17 00:25:00.134822 systemd-tmpfiles[1687]: ACLs are not supported, ignoring.
May 17 00:25:00.135027 systemd-tmpfiles[1687]: ACLs are not supported, ignoring.
May 17 00:25:00.141180 systemd-tmpfiles[1687]: Detected autofs mount point /boot during canonicalization of boot.
May 17 00:25:00.141348 systemd-tmpfiles[1687]: Skipping /boot
May 17 00:25:00.176441 systemd-udevd[1688]: Using default interface naming scheme 'v255'.
May 17 00:25:00.183662 systemd-tmpfiles[1687]: Detected autofs mount point /boot during canonicalization of boot.
May 17 00:25:00.184109 systemd-tmpfiles[1687]: Skipping /boot
May 17 00:25:00.200533 zram_generator::config[1714]: No configuration found.
May 17 00:25:00.382553 ldconfig[1581]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 17 00:25:00.406682 (udev-worker)[1757]: Network interface NamePolicy= disabled on kernel command line.
May 17 00:25:00.505517 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 17 00:25:00.517375 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:25:00.533567 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
May 17 00:25:00.541503 kernel: ACPI: button: Power Button [PWRF]
May 17 00:25:00.551517 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
May 17 00:25:00.569063 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
May 17 00:25:00.569170 kernel: ACPI: button: Sleep Button [SLPF]
May 17 00:25:00.635268 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1752)
May 17 00:25:00.654741 kernel: mousedev: PS/2 mouse device common for all mice
May 17 00:25:00.788004 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 17 00:25:00.788439 systemd[1]: Reloading finished in 695 ms.
May 17 00:25:00.831409 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:25:00.839922 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 17 00:25:00.844535 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:25:01.031036 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 17 00:25:01.042054 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
May 17 00:25:01.048871 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:25:01.053872 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 17 00:25:01.057805 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 17 00:25:01.058898 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:25:01.069867 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 17 00:25:01.073523 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
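The ldconfig complaint above ("/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start") is a plain magic-number check: ELF objects begin with the four bytes 0x7f 'E' 'L' 'F', and ld.so.conf is a text file, so the check rightly rejects it. The same test in a few lines (the path is just the one from the log):

    ELF_MAGIC = b"\x7fELF"

    def is_elf(path: str) -> bool:
        """True if the file starts with the ELF magic bytes 0x7f454c46."""
        with open(path, "rb") as f:
            return f.read(4) == ELF_MAGIC

    # A text file such as /lib/ld.so.conf starts with ASCII, not the ELF magic.
    print(is_elf("/lib/ld.so.conf"))  # False on a system laid out like this one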
May 17 00:25:01.089872 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 17 00:25:01.099643 lvm[1880]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 17 00:25:01.099305 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 00:25:01.103805 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 00:25:01.105203 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:25:01.112718 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 17 00:25:01.128292 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 17 00:25:01.140899 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 17 00:25:01.145383 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 17 00:25:01.146712 systemd[1]: Reached target time-set.target - System Time Set.
May 17 00:25:01.153716 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 17 00:25:01.158819 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:25:01.160568 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:25:01.167544 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 17 00:25:01.173217 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:25:01.173455 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 17 00:25:01.174941 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:25:01.175142 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 17 00:25:01.176449 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:25:01.177720 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 17 00:25:01.179365 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:25:01.180687 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 17 00:25:01.193673 systemd[1]: Finished ensure-sysext.service.
May 17 00:25:01.201397 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 17 00:25:01.209227 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 17 00:25:01.211464 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:25:01.211581 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 17 00:25:01.221967 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 17 00:25:01.238259 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 17 00:25:01.249365 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 17 00:25:01.267639 augenrules[1919]: No rules
May 17 00:25:01.271548 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 17 00:25:01.274254 lvm[1909]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 17 00:25:01.275297 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 17 00:25:01.289725 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 17 00:25:01.318738 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 17 00:25:01.319764 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 17 00:25:01.327272 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 17 00:25:01.334560 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 17 00:25:01.339162 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 17 00:25:01.410289 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:25:01.442422 systemd-networkd[1893]: lo: Link UP
May 17 00:25:01.442438 systemd-networkd[1893]: lo: Gained carrier
May 17 00:25:01.444997 systemd-networkd[1893]: Enumeration completed
May 17 00:25:01.445149 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 17 00:25:01.448419 systemd-networkd[1893]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:25:01.448435 systemd-networkd[1893]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:25:01.457986 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 17 00:25:01.459101 systemd-networkd[1893]: eth0: Link UP
May 17 00:25:01.460889 systemd-networkd[1893]: eth0: Gained carrier
May 17 00:25:01.460922 systemd-networkd[1893]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:25:01.461808 systemd-resolved[1894]: Positive Trust Anchors:
May 17 00:25:01.461832 systemd-resolved[1894]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:25:01.461883 systemd-resolved[1894]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 17 00:25:01.475609 systemd-networkd[1893]: eth0: DHCPv4 address 172.31.23.228/20, gateway 172.31.16.1 acquired from 172.31.16.1
May 17 00:25:01.477552 systemd-resolved[1894]: Defaulting to hostname 'linux'.
May 17 00:25:01.480371 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 17 00:25:01.481421 systemd[1]: Reached target network.target - Network.
May 17 00:25:01.482441 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 17 00:25:01.483197 systemd[1]: Reached target sysinit.target - System Initialization.
May 17 00:25:01.484406 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
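The DHCPv4 lease above, 172.31.23.228/20 with gateway 172.31.16.1, is easy to sanity-check: a /20 covers 4096 addresses, and both the address and the gateway must fall inside the same block. A quick verification with Python's ipaddress module:

    import ipaddress

    iface = ipaddress.ip_interface("172.31.23.228/20")
    net = iface.network
    print(net)                                          # 172.31.16.0/20
    print(net.num_addresses)                            # 4096
    print(ipaddress.ip_address("172.31.16.1") in net)   # True: gateway is in-subnet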
May 17 00:25:01.485176 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 17 00:25:01.486461 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 17 00:25:01.487864 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 17 00:25:01.488527 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 17 00:25:01.489031 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 17 00:25:01.489082 systemd[1]: Reached target paths.target - Path Units.
May 17 00:25:01.489640 systemd[1]: Reached target timers.target - Timer Units.
May 17 00:25:01.491782 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 17 00:25:01.494608 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 17 00:25:01.501881 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 17 00:25:01.503593 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 17 00:25:01.504279 systemd[1]: Reached target sockets.target - Socket Units.
May 17 00:25:01.504879 systemd[1]: Reached target basic.target - Basic System.
May 17 00:25:01.505743 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 17 00:25:01.505792 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 17 00:25:01.507132 systemd[1]: Starting containerd.service - containerd container runtime...
May 17 00:25:01.511760 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 17 00:25:01.516735 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 17 00:25:01.519389 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 17 00:25:01.522736 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 17 00:25:01.523441 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 17 00:25:01.528007 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 17 00:25:01.532756 systemd[1]: Started ntpd.service - Network Time Service.
May 17 00:25:01.538853 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 17 00:25:01.544716 jq[1946]: false
May 17 00:25:01.553640 systemd[1]: Starting setup-oem.service - Setup OEM...
May 17 00:25:01.556717 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 17 00:25:01.561716 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 17 00:25:01.576280 systemd[1]: Starting systemd-logind.service - User Login Management...
May 17 00:25:01.578812 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 17 00:25:01.580293 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 17 00:25:01.588761 systemd[1]: Starting update-engine.service - Update Engine...
May 17 00:25:01.593147 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
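sshd.socket and docker.socket above are socket units: systemd binds the listening socket itself and starts the daemon on demand, handing the socket over as inherited file descriptors per the sd_listen_fds(3) protocol (LISTEN_PID and LISTEN_FDS in the environment, descriptors starting at 3). A minimal socket-activated server sketch under that protocol; the greeting it sends is obviously illustrative:

    import os
    import socket

    SD_LISTEN_FDS_START = 3  # first inherited fd, per sd_listen_fds(3)

    def inherited_sockets() -> list:
        """Adopt sockets passed by systemd socket activation, if any."""
        if os.environ.get("LISTEN_PID") != str(os.getpid()):
            return []
        n = int(os.environ.get("LISTEN_FDS", "0"))
        return [socket.socket(fileno=SD_LISTEN_FDS_START + i) for i in range(n)]

    socks = inherited_sockets()
    if socks:
        conn, peer = socks[0].accept()  # systemd already did bind() and listen()
        conn.sendall(b"hello from a socket-activated service\n")
        conn.close()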
May 17 00:25:01.616064 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 17 00:25:01.616289 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 17 00:25:01.652037 jq[1958]: true
May 17 00:25:01.658156 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 17 00:25:01.658387 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 17 00:25:01.714948 (ntainerd)[1974]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 17 00:25:01.722923 update_engine[1956]: I20250517 00:25:01.722817 1956 main.cc:92] Flatcar Update Engine starting
May 17 00:25:01.723260 extend-filesystems[1947]: Found loop4
May 17 00:25:01.723260 extend-filesystems[1947]: Found loop5
May 17 00:25:01.723260 extend-filesystems[1947]: Found loop6
May 17 00:25:01.723260 extend-filesystems[1947]: Found loop7
May 17 00:25:01.725615 extend-filesystems[1947]: Found nvme0n1
May 17 00:25:01.725615 extend-filesystems[1947]: Found nvme0n1p1
May 17 00:25:01.725615 extend-filesystems[1947]: Found nvme0n1p2
May 17 00:25:01.725615 extend-filesystems[1947]: Found nvme0n1p3
May 17 00:25:01.734517 extend-filesystems[1947]: Found usr
May 17 00:25:01.734517 extend-filesystems[1947]: Found nvme0n1p4
May 17 00:25:01.738816 extend-filesystems[1947]: Found nvme0n1p6
May 17 00:25:01.738816 extend-filesystems[1947]: Found nvme0n1p7
May 17 00:25:01.738816 extend-filesystems[1947]: Found nvme0n1p9
May 17 00:25:01.738816 extend-filesystems[1947]: Checking size of /dev/nvme0n1p9
May 17 00:25:01.748516 jq[1966]: true
May 17 00:25:01.770906 systemd[1]: motdgen.service: Deactivated successfully.
May 17 00:25:01.771180 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 17 00:25:01.777027 dbus-daemon[1945]: [system] SELinux support is enabled
May 17 00:25:01.777258 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 17 00:25:01.785840 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 17 00:25:01.785899 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 17 00:25:01.786466 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 17 00:25:01.786551 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 17 00:25:01.795774 extend-filesystems[1947]: Resized partition /dev/nvme0n1p9
May 17 00:25:01.808607 dbus-daemon[1945]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1893 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
May 17 00:25:01.824642 update_engine[1956]: I20250517 00:25:01.823814 1956 update_check_scheduler.cc:74] Next update check in 9m13s
May 17 00:25:01.824264 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
May 17 00:25:01.826177 ntpd[1949]: ntpd 4.2.8p17@1.4004-o Fri May 16 22:07:47 UTC 2025 (1): Starting
May 17 00:25:01.826236 systemd[1]: Finished setup-oem.service - Setup OEM.
May 17 00:25:01.828001 ntpd[1949]: 17 May 00:25:01 ntpd[1949]: ntpd 4.2.8p17@1.4004-o Fri May 16 22:07:47 UTC 2025 (1): Starting
May 17 00:25:01.828001 ntpd[1949]: 17 May 00:25:01 ntpd[1949]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
May 17 00:25:01.828001 ntpd[1949]: 17 May 00:25:01 ntpd[1949]: ----------------------------------------------------
May 17 00:25:01.828001 ntpd[1949]: 17 May 00:25:01 ntpd[1949]: ntp-4 is maintained by Network Time Foundation,
May 17 00:25:01.828001 ntpd[1949]: 17 May 00:25:01 ntpd[1949]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
May 17 00:25:01.828001 ntpd[1949]: 17 May 00:25:01 ntpd[1949]: corporation. Support and training for ntp-4 are
May 17 00:25:01.828001 ntpd[1949]: 17 May 00:25:01 ntpd[1949]: available at https://www.nwtime.org/support
May 17 00:25:01.828001 ntpd[1949]: 17 May 00:25:01 ntpd[1949]: ----------------------------------------------------
May 17 00:25:01.826238 ntpd[1949]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
May 17 00:25:01.828967 systemd[1]: Started update-engine.service - Update Engine.
May 17 00:25:01.830825 extend-filesystems[1997]: resize2fs 1.47.1 (20-May-2024)
May 17 00:25:01.826249 ntpd[1949]: ----------------------------------------------------
May 17 00:25:01.834380 ntpd[1949]: 17 May 00:25:01 ntpd[1949]: proto: precision = 0.064 usec (-24)
May 17 00:25:01.834380 ntpd[1949]: 17 May 00:25:01 ntpd[1949]: basedate set to 2025-05-04
May 17 00:25:01.834380 ntpd[1949]: 17 May 00:25:01 ntpd[1949]: gps base set to 2025-05-04 (week 2365)
May 17 00:25:01.839593 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
May 17 00:25:01.826259 ntpd[1949]: ntp-4 is maintained by Network Time Foundation,
May 17 00:25:01.826269 ntpd[1949]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
May 17 00:25:01.826278 ntpd[1949]: corporation. Support and training for ntp-4 are
May 17 00:25:01.826288 ntpd[1949]: available at https://www.nwtime.org/support
May 17 00:25:01.826299 ntpd[1949]: ----------------------------------------------------
May 17 00:25:01.831902 ntpd[1949]: proto: precision = 0.064 usec (-24)
May 17 00:25:01.833778 ntpd[1949]: basedate set to 2025-05-04
May 17 00:25:01.833800 ntpd[1949]: gps base set to 2025-05-04 (week 2365)
May 17 00:25:01.840712 systemd[1]: Started locksmithd.service - Cluster reboot manager.
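The EXT4 resize above grows nvme0n1p9 from 553472 to 1489915 blocks. With the 4 KiB block size that is typical for ext4 (an assumption; the log does not state it), that is roughly a 2.1 GiB to 5.7 GiB expansion, which matches extend-filesystems stretching the root partition to fill the EBS volume:

    BLOCK = 4096  # assumed ext4 block size in bytes

    before, after = 553472, 1489915  # block counts from the kernel line above

    def gib(blocks: int) -> float:
        return blocks * BLOCK / 2**30

    print(f"{gib(before):.2f} GiB -> {gib(after):.2f} GiB")  # 2.11 GiB -> 5.68 GiB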
May 17 00:25:01.840932 ntpd[1949]: Listen and drop on 0 v6wildcard [::]:123 May 17 00:25:01.841030 ntpd[1949]: 17 May 00:25:01 ntpd[1949]: Listen and drop on 0 v6wildcard [::]:123 May 17 00:25:01.841120 ntpd[1949]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 17 00:25:01.841183 ntpd[1949]: 17 May 00:25:01 ntpd[1949]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 17 00:25:01.841463 ntpd[1949]: Listen normally on 2 lo 127.0.0.1:123 May 17 00:25:01.844507 ntpd[1949]: 17 May 00:25:01 ntpd[1949]: Listen normally on 2 lo 127.0.0.1:123 May 17 00:25:01.844507 ntpd[1949]: 17 May 00:25:01 ntpd[1949]: Listen normally on 3 eth0 172.31.23.228:123 May 17 00:25:01.844507 ntpd[1949]: 17 May 00:25:01 ntpd[1949]: Listen normally on 4 lo [::1]:123 May 17 00:25:01.844507 ntpd[1949]: 17 May 00:25:01 ntpd[1949]: bind(21) AF_INET6 fe80::45c:17ff:fe5b:d185%2#123 flags 0x11 failed: Cannot assign requested address May 17 00:25:01.844507 ntpd[1949]: 17 May 00:25:01 ntpd[1949]: unable to create socket on eth0 (5) for fe80::45c:17ff:fe5b:d185%2#123 May 17 00:25:01.844507 ntpd[1949]: 17 May 00:25:01 ntpd[1949]: failed to init interface for address fe80::45c:17ff:fe5b:d185%2 May 17 00:25:01.844507 ntpd[1949]: 17 May 00:25:01 ntpd[1949]: Listening on routing socket on fd #21 for interface updates May 17 00:25:01.844507 ntpd[1949]: 17 May 00:25:01 ntpd[1949]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 17 00:25:01.844507 ntpd[1949]: 17 May 00:25:01 ntpd[1949]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 17 00:25:01.842571 ntpd[1949]: Listen normally on 3 eth0 172.31.23.228:123 May 17 00:25:01.842617 ntpd[1949]: Listen normally on 4 lo [::1]:123 May 17 00:25:01.842671 ntpd[1949]: bind(21) AF_INET6 fe80::45c:17ff:fe5b:d185%2#123 flags 0x11 failed: Cannot assign requested address May 17 00:25:01.842694 ntpd[1949]: unable to create socket on eth0 (5) for fe80::45c:17ff:fe5b:d185%2#123 May 17 00:25:01.842708 ntpd[1949]: failed to init interface for address fe80::45c:17ff:fe5b:d185%2 May 17 00:25:01.842748 ntpd[1949]: Listening on routing socket on fd #21 for interface updates May 17 00:25:01.844295 ntpd[1949]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 17 00:25:01.844322 ntpd[1949]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 17 00:25:01.847084 tar[1965]: linux-amd64/helm May 17 00:25:01.911555 coreos-metadata[1944]: May 17 00:25:01.910 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 17 00:25:01.916314 coreos-metadata[1944]: May 17 00:25:01.915 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 May 17 00:25:01.917628 coreos-metadata[1944]: May 17 00:25:01.917 INFO Fetch successful May 17 00:25:01.917730 coreos-metadata[1944]: May 17 00:25:01.917 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 May 17 00:25:01.921674 coreos-metadata[1944]: May 17 00:25:01.921 INFO Fetch successful May 17 00:25:01.921782 coreos-metadata[1944]: May 17 00:25:01.921 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 May 17 00:25:01.922779 coreos-metadata[1944]: May 17 00:25:01.922 INFO Fetch successful May 17 00:25:01.922872 coreos-metadata[1944]: May 17 00:25:01.922 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 May 17 00:25:01.924364 coreos-metadata[1944]: May 17 00:25:01.924 INFO Fetch successful May 17 00:25:01.924443 coreos-metadata[1944]: May 17 00:25:01.924 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 
May 17 00:25:01.952511 coreos-metadata[1944]: May 17 00:25:01.926 INFO Fetch failed with 404: resource not found May 17 00:25:01.952511 coreos-metadata[1944]: May 17 00:25:01.926 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 May 17 00:25:01.952511 coreos-metadata[1944]: May 17 00:25:01.929 INFO Fetch successful May 17 00:25:01.952511 coreos-metadata[1944]: May 17 00:25:01.929 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 May 17 00:25:01.952511 coreos-metadata[1944]: May 17 00:25:01.929 INFO Fetch successful May 17 00:25:01.952511 coreos-metadata[1944]: May 17 00:25:01.930 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 May 17 00:25:01.952511 coreos-metadata[1944]: May 17 00:25:01.930 INFO Fetch successful May 17 00:25:01.952511 coreos-metadata[1944]: May 17 00:25:01.931 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 May 17 00:25:01.952511 coreos-metadata[1944]: May 17 00:25:01.934 INFO Fetch successful May 17 00:25:01.952511 coreos-metadata[1944]: May 17 00:25:01.934 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 May 17 00:25:01.952511 coreos-metadata[1944]: May 17 00:25:01.935 INFO Fetch successful May 17 00:25:01.971696 systemd-logind[1955]: Watching system buttons on /dev/input/event1 (Power Button) May 17 00:25:02.019962 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1753) May 17 00:25:01.971727 systemd-logind[1955]: Watching system buttons on /dev/input/event2 (Sleep Button) May 17 00:25:01.971800 systemd-logind[1955]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 17 00:25:01.972046 systemd-logind[1955]: New seat seat0. May 17 00:25:02.091442 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 May 17 00:25:01.979043 systemd[1]: Started systemd-logind.service - User Login Management. May 17 00:25:01.982684 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 17 00:25:01.986117 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 17 00:25:02.096015 extend-filesystems[1997]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required May 17 00:25:02.096015 extend-filesystems[1997]: old_desc_blocks = 1, new_desc_blocks = 1 May 17 00:25:02.096015 extend-filesystems[1997]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. May 17 00:25:02.115367 extend-filesystems[1947]: Resized filesystem in /dev/nvme0n1p9 May 17 00:25:02.122617 bash[2021]: Updated "/home/core/.ssh/authorized_keys" May 17 00:25:02.098260 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:25:02.098535 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 17 00:25:02.124111 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 17 00:25:02.142895 systemd[1]: Starting sshkeys.service... May 17 00:25:02.215986 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 17 00:25:02.226017 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
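The coreos-metadata sequence above is the IMDSv2 pattern: one PUT to the token endpoint, then GETs against the 2021-01-03 metadata tree with the token attached (the ipv6 path 404s because this instance has no IPv6 address assigned at that point). A minimal sketch of the same exchange; it only does anything useful from inside an EC2 instance:

    import urllib.request

    BASE = "http://169.254.169.254"

    # IMDSv2 step 1: PUT to the token endpoint, as in "Putting .../api/token".
    def imds_token(ttl: int = 60) -> str:
        req = urllib.request.Request(
            BASE + "/latest/api/token", method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)})
        with urllib.request.urlopen(req, timeout=2) as r:
            return r.read().decode()

    # IMDSv2 step 2: GET a path from the versioned tree used in the log.
    def imds_get(path: str, token: str) -> str:
        req = urllib.request.Request(
            BASE + "/2021-01-03/meta-data/" + path,
            headers={"X-aws-ec2-metadata-token": token})
        with urllib.request.urlopen(req, timeout=2) as r:
            return r.read().decode()

    # e.g. imds_get("instance-id", imds_token())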
May 17 00:25:02.356194 dbus-daemon[1945]: [system] Successfully activated service 'org.freedesktop.hostname1' May 17 00:25:02.357818 systemd[1]: Started systemd-hostnamed.service - Hostname Service. May 17 00:25:02.362361 dbus-daemon[1945]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1998 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 17 00:25:02.370962 locksmithd[2000]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:25:02.376533 systemd[1]: Starting polkit.service - Authorization Manager... May 17 00:25:02.406090 polkitd[2127]: Started polkitd version 121 May 17 00:25:02.446373 polkitd[2127]: Loading rules from directory /etc/polkit-1/rules.d May 17 00:25:02.446459 polkitd[2127]: Loading rules from directory /usr/share/polkit-1/rules.d May 17 00:25:02.451351 polkitd[2127]: Finished loading, compiling and executing 2 rules May 17 00:25:02.456033 dbus-daemon[1945]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 17 00:25:02.456253 systemd[1]: Started polkit.service - Authorization Manager. May 17 00:25:02.459052 polkitd[2127]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 17 00:25:02.522863 systemd-hostnamed[1998]: Hostname set to (transient) May 17 00:25:02.523377 systemd-resolved[1894]: System hostname changed to 'ip-172-31-23-228'. May 17 00:25:02.535583 coreos-metadata[2112]: May 17 00:25:02.535 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 17 00:25:02.539575 coreos-metadata[2112]: May 17 00:25:02.539 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 May 17 00:25:02.539837 coreos-metadata[2112]: May 17 00:25:02.539 INFO Fetch successful May 17 00:25:02.539933 coreos-metadata[2112]: May 17 00:25:02.539 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 May 17 00:25:02.543500 coreos-metadata[2112]: May 17 00:25:02.543 INFO Fetch successful May 17 00:25:02.546735 unknown[2112]: wrote ssh authorized keys file for user: core May 17 00:25:02.596155 update-ssh-keys[2144]: Updated "/home/core/.ssh/authorized_keys" May 17 00:25:02.597914 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 17 00:25:02.604535 systemd[1]: Finished sshkeys.service. May 17 00:25:02.630437 containerd[1974]: time="2025-05-17T00:25:02.630294583Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 17 00:25:02.677698 containerd[1974]: time="2025-05-17T00:25:02.677618879Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 00:25:02.679835 containerd[1974]: time="2025-05-17T00:25:02.679784928Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:25:02.679967 containerd[1974]: time="2025-05-17T00:25:02.679950590Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 00:25:02.680036 containerd[1974]: time="2025-05-17T00:25:02.680023526Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 May 17 00:25:02.680290 containerd[1974]: time="2025-05-17T00:25:02.680273541Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 17 00:25:02.680506 containerd[1974]: time="2025-05-17T00:25:02.680399721Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 17 00:25:02.681636 containerd[1974]: time="2025-05-17T00:25:02.680570251Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:25:02.681636 containerd[1974]: time="2025-05-17T00:25:02.680598478Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 00:25:02.681636 containerd[1974]: time="2025-05-17T00:25:02.680827195Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:25:02.681636 containerd[1974]: time="2025-05-17T00:25:02.680850069Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 00:25:02.681636 containerd[1974]: time="2025-05-17T00:25:02.680870989Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:25:02.681636 containerd[1974]: time="2025-05-17T00:25:02.680886624Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:25:02.681636 containerd[1974]: time="2025-05-17T00:25:02.680987590Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 00:25:02.681636 containerd[1974]: time="2025-05-17T00:25:02.681229884Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:25:02.681636 containerd[1974]: time="2025-05-17T00:25:02.681380378Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:25:02.681636 containerd[1974]: time="2025-05-17T00:25:02.681402624Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 00:25:02.681636 containerd[1974]: time="2025-05-17T00:25:02.681540602Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 17 00:25:02.682053 containerd[1974]: time="2025-05-17T00:25:02.681598318Z" level=info msg="metadata content store policy set" policy=shared May 17 00:25:02.689908 containerd[1974]: time="2025-05-17T00:25:02.689830916Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:25:02.690643 containerd[1974]: time="2025-05-17T00:25:02.690062550Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:25:02.690643 containerd[1974]: time="2025-05-17T00:25:02.690134780Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 May 17 00:25:02.690643 containerd[1974]: time="2025-05-17T00:25:02.690163925Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 17 00:25:02.690643 containerd[1974]: time="2025-05-17T00:25:02.690187493Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 00:25:02.690643 containerd[1974]: time="2025-05-17T00:25:02.690374251Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 00:25:02.691128 containerd[1974]: time="2025-05-17T00:25:02.691104813Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 00:25:02.691346 containerd[1974]: time="2025-05-17T00:25:02.691327793Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 17 00:25:02.691429 containerd[1974]: time="2025-05-17T00:25:02.691412060Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 17 00:25:02.694191 containerd[1974]: time="2025-05-17T00:25:02.693531932Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 17 00:25:02.694191 containerd[1974]: time="2025-05-17T00:25:02.693570317Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 00:25:02.694191 containerd[1974]: time="2025-05-17T00:25:02.693592442Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:25:02.694191 containerd[1974]: time="2025-05-17T00:25:02.693612533Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 00:25:02.694191 containerd[1974]: time="2025-05-17T00:25:02.693634035Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 00:25:02.694191 containerd[1974]: time="2025-05-17T00:25:02.693657246Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 00:25:02.694191 containerd[1974]: time="2025-05-17T00:25:02.693677085Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:25:02.694191 containerd[1974]: time="2025-05-17T00:25:02.693698863Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 00:25:02.694191 containerd[1974]: time="2025-05-17T00:25:02.693718536Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 00:25:02.694191 containerd[1974]: time="2025-05-17T00:25:02.693748012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 00:25:02.694191 containerd[1974]: time="2025-05-17T00:25:02.693768526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 17 00:25:02.694191 containerd[1974]: time="2025-05-17T00:25:02.693787017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:25:02.694191 containerd[1974]: time="2025-05-17T00:25:02.693817349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 May 17 00:25:02.694191 containerd[1974]: time="2025-05-17T00:25:02.693835827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 00:25:02.694757 containerd[1974]: time="2025-05-17T00:25:02.693857357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 00:25:02.694757 containerd[1974]: time="2025-05-17T00:25:02.693876592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 00:25:02.694757 containerd[1974]: time="2025-05-17T00:25:02.693895962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 00:25:02.694757 containerd[1974]: time="2025-05-17T00:25:02.693918131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 17 00:25:02.694757 containerd[1974]: time="2025-05-17T00:25:02.693941313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 17 00:25:02.694757 containerd[1974]: time="2025-05-17T00:25:02.693960289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 00:25:02.694757 containerd[1974]: time="2025-05-17T00:25:02.694009517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 17 00:25:02.694757 containerd[1974]: time="2025-05-17T00:25:02.694031750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:25:02.694757 containerd[1974]: time="2025-05-17T00:25:02.694054189Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 17 00:25:02.694757 containerd[1974]: time="2025-05-17T00:25:02.694089018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 17 00:25:02.694757 containerd[1974]: time="2025-05-17T00:25:02.694108755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 00:25:02.694757 containerd[1974]: time="2025-05-17T00:25:02.694125061Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:25:02.696316 containerd[1974]: time="2025-05-17T00:25:02.695226470Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 00:25:02.696316 containerd[1974]: time="2025-05-17T00:25:02.695330861Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 17 00:25:02.696316 containerd[1974]: time="2025-05-17T00:25:02.695351318Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 00:25:02.696316 containerd[1974]: time="2025-05-17T00:25:02.695370536Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 17 00:25:02.696316 containerd[1974]: time="2025-05-17T00:25:02.695387475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:25:02.696316 containerd[1974]: time="2025-05-17T00:25:02.695413541Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 May 17 00:25:02.696316 containerd[1974]: time="2025-05-17T00:25:02.695428313Z" level=info msg="NRI interface is disabled by configuration." May 17 00:25:02.696316 containerd[1974]: time="2025-05-17T00:25:02.695444552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 17 00:25:02.703531 containerd[1974]: time="2025-05-17T00:25:02.700397857Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:25:02.703531 containerd[1974]: time="2025-05-17T00:25:02.700572525Z" level=info msg="Connect containerd service" May 17 00:25:02.703531 containerd[1974]: time="2025-05-17T00:25:02.700649526Z" level=info msg="using legacy CRI server" May 17 00:25:02.703531 containerd[1974]: time="2025-05-17T00:25:02.700668873Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 17 00:25:02.703531 containerd[1974]: time="2025-05-17T00:25:02.700820181Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:25:02.704883 
containerd[1974]: time="2025-05-17T00:25:02.704840602Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:25:02.706781 containerd[1974]: time="2025-05-17T00:25:02.706630471Z" level=info msg="Start subscribing containerd event" May 17 00:25:02.706781 containerd[1974]: time="2025-05-17T00:25:02.706712742Z" level=info msg="Start recovering state" May 17 00:25:02.706911 containerd[1974]: time="2025-05-17T00:25:02.706797020Z" level=info msg="Start event monitor" May 17 00:25:02.706911 containerd[1974]: time="2025-05-17T00:25:02.706828209Z" level=info msg="Start snapshots syncer" May 17 00:25:02.706911 containerd[1974]: time="2025-05-17T00:25:02.706841496Z" level=info msg="Start cni network conf syncer for default" May 17 00:25:02.706911 containerd[1974]: time="2025-05-17T00:25:02.706855458Z" level=info msg="Start streaming server" May 17 00:25:02.707263 containerd[1974]: time="2025-05-17T00:25:02.707240278Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:25:02.707399 containerd[1974]: time="2025-05-17T00:25:02.707383063Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 00:25:02.710347 containerd[1974]: time="2025-05-17T00:25:02.709597609Z" level=info msg="containerd successfully booted in 0.081211s" May 17 00:25:02.709749 systemd[1]: Started containerd.service - containerd container runtime. May 17 00:25:02.827782 ntpd[1949]: bind(24) AF_INET6 fe80::45c:17ff:fe5b:d185%2#123 flags 0x11 failed: Cannot assign requested address May 17 00:25:02.828339 ntpd[1949]: 17 May 00:25:02 ntpd[1949]: bind(24) AF_INET6 fe80::45c:17ff:fe5b:d185%2#123 flags 0x11 failed: Cannot assign requested address May 17 00:25:02.828339 ntpd[1949]: 17 May 00:25:02 ntpd[1949]: unable to create socket on eth0 (6) for fe80::45c:17ff:fe5b:d185%2#123 May 17 00:25:02.828339 ntpd[1949]: 17 May 00:25:02 ntpd[1949]: failed to init interface for address fe80::45c:17ff:fe5b:d185%2 May 17 00:25:02.828194 ntpd[1949]: unable to create socket on eth0 (6) for fe80::45c:17ff:fe5b:d185%2#123 May 17 00:25:02.828212 ntpd[1949]: failed to init interface for address fe80::45c:17ff:fe5b:d185%2 May 17 00:25:02.854344 sshd_keygen[1985]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:25:02.891771 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 17 00:25:02.899607 systemd[1]: Starting issuegen.service - Generate /run/issue... May 17 00:25:02.917528 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:25:02.917785 systemd[1]: Finished issuegen.service - Generate /run/issue. May 17 00:25:02.925594 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 17 00:25:02.942529 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 17 00:25:02.952804 systemd[1]: Started getty@tty1.service - Getty on tty1. May 17 00:25:02.959970 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 17 00:25:02.960905 systemd[1]: Reached target getty.target - Login Prompts. May 17 00:25:03.067430 tar[1965]: linux-amd64/LICENSE May 17 00:25:03.067891 tar[1965]: linux-amd64/README.md May 17 00:25:03.079847 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
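The one containerd error above, "no network config found in /etc/cni/net.d", is expected at this stage: the CRI plugin loads CNI configuration lazily and nothing has installed one yet. Purely as an illustration of what such a config looks like, a sketch that writes a minimal bridge conflist; the network name, subnet, and plugin choice are assumptions, not what this node ends up running:

    import json, pathlib

    # Minimal CNI conflist; values are illustrative. Writing to /etc/cni/net.d
    # requires root, and the bridge/host-local/portmap binaries must exist
    # under /opt/cni/bin (the NetworkPluginBinDir from the config dump above).
    conf = {
        "cniVersion": "0.4.0",
        "name": "demo-net",
        "plugins": [
            {"type": "bridge", "bridge": "cni0", "isGateway": True,
             "ipMasq": True,
             "ipam": {"type": "host-local", "subnet": "10.88.0.0/16"}},
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }
    pathlib.Path("/etc/cni/net.d/10-demo.conflist").write_text(
        json.dumps(conf, indent=2))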
May 17 00:25:03.365692 systemd-networkd[1893]: eth0: Gained IPv6LL May 17 00:25:03.368437 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 17 00:25:03.369370 systemd[1]: Reached target network-online.target - Network is Online. May 17 00:25:03.374768 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. May 17 00:25:03.378956 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:25:03.382578 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 17 00:25:03.420151 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 17 00:25:03.429959 amazon-ssm-agent[2172]: Initializing new seelog logger May 17 00:25:03.432348 amazon-ssm-agent[2172]: New Seelog Logger Creation Complete May 17 00:25:03.432348 amazon-ssm-agent[2172]: 2025/05/17 00:25:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:25:03.432348 amazon-ssm-agent[2172]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:25:03.432348 amazon-ssm-agent[2172]: 2025/05/17 00:25:03 processing appconfig overrides May 17 00:25:03.432348 amazon-ssm-agent[2172]: 2025/05/17 00:25:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:25:03.432348 amazon-ssm-agent[2172]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:25:03.432348 amazon-ssm-agent[2172]: 2025/05/17 00:25:03 processing appconfig overrides May 17 00:25:03.432348 amazon-ssm-agent[2172]: 2025-05-17 00:25:03 INFO Proxy environment variables: May 17 00:25:03.432348 amazon-ssm-agent[2172]: 2025/05/17 00:25:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:25:03.432348 amazon-ssm-agent[2172]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:25:03.432348 amazon-ssm-agent[2172]: 2025/05/17 00:25:03 processing appconfig overrides May 17 00:25:03.438541 amazon-ssm-agent[2172]: 2025/05/17 00:25:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:25:03.438541 amazon-ssm-agent[2172]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
May 17 00:25:03.438541 amazon-ssm-agent[2172]: 2025/05/17 00:25:03 processing appconfig overrides May 17 00:25:03.531317 amazon-ssm-agent[2172]: 2025-05-17 00:25:03 INFO https_proxy: May 17 00:25:03.629777 amazon-ssm-agent[2172]: 2025-05-17 00:25:03 INFO http_proxy: May 17 00:25:03.661666 amazon-ssm-agent[2172]: 2025-05-17 00:25:03 INFO no_proxy: May 17 00:25:03.661666 amazon-ssm-agent[2172]: 2025-05-17 00:25:03 INFO Checking if agent identity type OnPrem can be assumed May 17 00:25:03.661666 amazon-ssm-agent[2172]: 2025-05-17 00:25:03 INFO Checking if agent identity type EC2 can be assumed May 17 00:25:03.661666 amazon-ssm-agent[2172]: 2025-05-17 00:25:03 INFO Agent will take identity from EC2 May 17 00:25:03.661666 amazon-ssm-agent[2172]: 2025-05-17 00:25:03 INFO [amazon-ssm-agent] using named pipe channel for IPC May 17 00:25:03.661666 amazon-ssm-agent[2172]: 2025-05-17 00:25:03 INFO [amazon-ssm-agent] using named pipe channel for IPC May 17 00:25:03.661666 amazon-ssm-agent[2172]: 2025-05-17 00:25:03 INFO [amazon-ssm-agent] using named pipe channel for IPC May 17 00:25:03.661666 amazon-ssm-agent[2172]: 2025-05-17 00:25:03 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 May 17 00:25:03.661666 amazon-ssm-agent[2172]: 2025-05-17 00:25:03 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 May 17 00:25:03.661666 amazon-ssm-agent[2172]: 2025-05-17 00:25:03 INFO [amazon-ssm-agent] Starting Core Agent May 17 00:25:03.661666 amazon-ssm-agent[2172]: 2025-05-17 00:25:03 INFO [amazon-ssm-agent] registrar detected. Attempting registration May 17 00:25:03.661666 amazon-ssm-agent[2172]: 2025-05-17 00:25:03 INFO [Registrar] Starting registrar module May 17 00:25:03.661666 amazon-ssm-agent[2172]: 2025-05-17 00:25:03 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration May 17 00:25:03.661666 amazon-ssm-agent[2172]: 2025-05-17 00:25:03 INFO [EC2Identity] EC2 registration was successful. May 17 00:25:03.661666 amazon-ssm-agent[2172]: 2025-05-17 00:25:03 INFO [CredentialRefresher] credentialRefresher has started May 17 00:25:03.661666 amazon-ssm-agent[2172]: 2025-05-17 00:25:03 INFO [CredentialRefresher] Starting credentials refresher loop May 17 00:25:03.661666 amazon-ssm-agent[2172]: 2025-05-17 00:25:03 INFO EC2RoleProvider Successfully connected with instance profile role credentials May 17 00:25:03.727909 amazon-ssm-agent[2172]: 2025-05-17 00:25:03 INFO [CredentialRefresher] Next credential rotation will be in 31.408326233333334 minutes May 17 00:25:04.294304 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 17 00:25:04.299908 systemd[1]: Started sshd@0-172.31.23.228:22-147.75.109.163:46664.service - OpenSSH per-connection server daemon (147.75.109.163:46664). May 17 00:25:04.486720 sshd[2190]: Accepted publickey for core from 147.75.109.163 port 46664 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:25:04.490063 sshd[2190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:25:04.501748 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 17 00:25:04.508982 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 17 00:25:04.514557 systemd-logind[1955]: New session 1 of user core. May 17 00:25:04.528946 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 17 00:25:04.536972 systemd[1]: Starting user@500.service - User Manager for UID 500... 
May 17 00:25:04.542974 (systemd)[2194]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:25:04.688563 amazon-ssm-agent[2172]: 2025-05-17 00:25:04 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process May 17 00:25:04.697388 systemd[2194]: Queued start job for default target default.target. May 17 00:25:04.703966 systemd[2194]: Created slice app.slice - User Application Slice. May 17 00:25:04.704013 systemd[2194]: Reached target paths.target - Paths. May 17 00:25:04.704034 systemd[2194]: Reached target timers.target - Timers. May 17 00:25:04.706687 systemd[2194]: Starting dbus.socket - D-Bus User Message Bus Socket... May 17 00:25:04.728237 systemd[2194]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 17 00:25:04.728400 systemd[2194]: Reached target sockets.target - Sockets. May 17 00:25:04.728425 systemd[2194]: Reached target basic.target - Basic System. May 17 00:25:04.728516 systemd[2194]: Reached target default.target - Main User Target. May 17 00:25:04.728561 systemd[2194]: Startup finished in 177ms. May 17 00:25:04.728751 systemd[1]: Started user@500.service - User Manager for UID 500. May 17 00:25:04.734572 systemd[1]: Started session-1.scope - Session 1 of User core. May 17 00:25:04.787754 amazon-ssm-agent[2172]: 2025-05-17 00:25:04 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2202) started May 17 00:25:04.891349 amazon-ssm-agent[2172]: 2025-05-17 00:25:04 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds May 17 00:25:04.895610 systemd[1]: Started sshd@1-172.31.23.228:22-147.75.109.163:46666.service - OpenSSH per-connection server daemon (147.75.109.163:46666). May 17 00:25:05.062524 sshd[2220]: Accepted publickey for core from 147.75.109.163 port 46666 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:25:05.063720 sshd[2220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:25:05.069239 systemd-logind[1955]: New session 2 of user core. May 17 00:25:05.075737 systemd[1]: Started session-2.scope - Session 2 of User core. May 17 00:25:05.192963 sshd[2220]: pam_unix(sshd:session): session closed for user core May 17 00:25:05.197852 systemd[1]: sshd@1-172.31.23.228:22-147.75.109.163:46666.service: Deactivated successfully. May 17 00:25:05.200068 systemd[1]: session-2.scope: Deactivated successfully. May 17 00:25:05.201266 systemd-logind[1955]: Session 2 logged out. Waiting for processes to exit. May 17 00:25:05.204117 systemd-logind[1955]: Removed session 2. May 17 00:25:05.230076 systemd[1]: Started sshd@2-172.31.23.228:22-147.75.109.163:46672.service - OpenSSH per-connection server daemon (147.75.109.163:46672). May 17 00:25:05.384026 sshd[2227]: Accepted publickey for core from 147.75.109.163 port 46672 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:25:05.385661 sshd[2227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:25:05.391174 systemd-logind[1955]: New session 3 of user core. May 17 00:25:05.397725 systemd[1]: Started session-3.scope - Session 3 of User core. May 17 00:25:05.518797 sshd[2227]: pam_unix(sshd:session): session closed for user core May 17 00:25:05.523240 systemd[1]: sshd@2-172.31.23.228:22-147.75.109.163:46672.service: Deactivated successfully. 
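The "RSA SHA256:E8bm…" in the sshd accept lines above is OpenSSH's key fingerprint: the unpadded base64 of a SHA-256 digest over the raw public-key blob. It can be recomputed from an authorized_keys entry:

    import base64, hashlib

    # OpenSSH SHA256 fingerprint (what sshd logs and `ssh-keygen -lf` prints):
    # field 2 of an authorized_keys line is the base64 key blob; the
    # fingerprint is the unpadded base64 of its SHA-256 digest.
    def fingerprint(authorized_keys_line: str) -> str:
        blob = base64.b64decode(authorized_keys_line.split()[1])
        return "SHA256:" + base64.b64encode(
            hashlib.sha256(blob).digest()).decode().rstrip("=")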
May 17 00:25:05.525878 systemd[1]: session-3.scope: Deactivated successfully. May 17 00:25:05.526946 systemd-logind[1955]: Session 3 logged out. Waiting for processes to exit. May 17 00:25:05.528642 systemd-logind[1955]: Removed session 3. May 17 00:25:05.738564 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:25:05.739413 systemd[1]: Reached target multi-user.target - Multi-User System. May 17 00:25:05.741616 systemd[1]: Startup finished in 595ms (kernel) + 6.904s (initrd) + 8.181s (userspace) = 15.682s. May 17 00:25:05.744835 (kubelet)[2238]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:25:05.826709 ntpd[1949]: Listen normally on 7 eth0 [fe80::45c:17ff:fe5b:d185%2]:123 May 17 00:25:05.827045 ntpd[1949]: 17 May 00:25:05 ntpd[1949]: Listen normally on 7 eth0 [fe80::45c:17ff:fe5b:d185%2]:123 May 17 00:25:06.960770 kubelet[2238]: E0517 00:25:06.960713 2238 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:25:06.963914 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:25:06.964113 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:25:06.964797 systemd[1]: kubelet.service: Consumed 1.078s CPU time. May 17 00:25:09.635803 systemd-resolved[1894]: Clock change detected. Flushing caches. May 17 00:25:16.360229 systemd[1]: Started sshd@3-172.31.23.228:22-147.75.109.163:59546.service - OpenSSH per-connection server daemon (147.75.109.163:59546). May 17 00:25:16.515995 sshd[2250]: Accepted publickey for core from 147.75.109.163 port 59546 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:25:16.517384 sshd[2250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:25:16.521785 systemd-logind[1955]: New session 4 of user core. May 17 00:25:16.531674 systemd[1]: Started session-4.scope - Session 4 of User core. May 17 00:25:16.648910 sshd[2250]: pam_unix(sshd:session): session closed for user core May 17 00:25:16.652579 systemd[1]: sshd@3-172.31.23.228:22-147.75.109.163:59546.service: Deactivated successfully. May 17 00:25:16.654569 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:25:16.655968 systemd-logind[1955]: Session 4 logged out. Waiting for processes to exit. May 17 00:25:16.657281 systemd-logind[1955]: Removed session 4. May 17 00:25:16.681082 systemd[1]: Started sshd@4-172.31.23.228:22-147.75.109.163:59560.service - OpenSSH per-connection server daemon (147.75.109.163:59560). May 17 00:25:16.844488 sshd[2257]: Accepted publickey for core from 147.75.109.163 port 59560 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:25:16.845924 sshd[2257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:25:16.850969 systemd-logind[1955]: New session 5 of user core. May 17 00:25:16.861668 systemd[1]: Started session-5.scope - Session 5 of User core. May 17 00:25:16.975498 sshd[2257]: pam_unix(sshd:session): session closed for user core May 17 00:25:16.978242 systemd[1]: sshd@4-172.31.23.228:22-147.75.109.163:59560.service: Deactivated successfully. 
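That last ntpd line closes the arc that began at 00:25:01: the early bind(21)/bind(24) failures on fe80::45c:17ff:fe5b:d185 happened because the link-local address was not yet usable, and once systemd-networkd reported "eth0: Gained IPv6LL" the retry succeeded ("Listen normally on 7 eth0"). The failure mode is easy to reproduce; a sketch assuming an eth0 interface, using the address from the log:

    import errno, socket

    # Binding a UDP socket to an IPv6 link-local address the kernel has not
    # finished bringing up fails with EADDRNOTAVAIL ("Cannot assign requested
    # address"), which is exactly what ntpd logged. Port 123 also needs root.
    addr, port = "fe80::45c:17ff:fe5b:d185", 123
    try:
        s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
        s.bind((addr, port, 0, socket.if_nametoindex("eth0")))
        print("bound (address is up)")
    except OSError as e:
        print("bind failed:", errno.errorcode.get(e.errno, e.errno), e)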
May 17 00:25:16.979832 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:25:16.981112 systemd-logind[1955]: Session 5 logged out. Waiting for processes to exit. May 17 00:25:16.981985 systemd-logind[1955]: Removed session 5. May 17 00:25:17.009649 systemd[1]: Started sshd@5-172.31.23.228:22-147.75.109.163:59564.service - OpenSSH per-connection server daemon (147.75.109.163:59564). May 17 00:25:17.162983 sshd[2264]: Accepted publickey for core from 147.75.109.163 port 59564 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:25:17.164379 sshd[2264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:25:17.168780 systemd-logind[1955]: New session 6 of user core. May 17 00:25:17.180675 systemd[1]: Started session-6.scope - Session 6 of User core. May 17 00:25:17.297335 sshd[2264]: pam_unix(sshd:session): session closed for user core May 17 00:25:17.300740 systemd[1]: sshd@5-172.31.23.228:22-147.75.109.163:59564.service: Deactivated successfully. May 17 00:25:17.302545 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:25:17.303231 systemd-logind[1955]: Session 6 logged out. Waiting for processes to exit. May 17 00:25:17.304294 systemd-logind[1955]: Removed session 6. May 17 00:25:17.330545 systemd[1]: Started sshd@6-172.31.23.228:22-147.75.109.163:59574.service - OpenSSH per-connection server daemon (147.75.109.163:59574). May 17 00:25:17.488409 sshd[2271]: Accepted publickey for core from 147.75.109.163 port 59574 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:25:17.489876 sshd[2271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:25:17.494005 systemd-logind[1955]: New session 7 of user core. May 17 00:25:17.509700 systemd[1]: Started session-7.scope - Session 7 of User core. May 17 00:25:17.619752 sudo[2274]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 17 00:25:17.620033 sudo[2274]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:25:17.636250 sudo[2274]: pam_unix(sudo:session): session closed for user root May 17 00:25:17.659475 sshd[2271]: pam_unix(sshd:session): session closed for user core May 17 00:25:17.663939 systemd[1]: sshd@6-172.31.23.228:22-147.75.109.163:59574.service: Deactivated successfully. May 17 00:25:17.665749 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:25:17.666613 systemd-logind[1955]: Session 7 logged out. Waiting for processes to exit. May 17 00:25:17.667830 systemd-logind[1955]: Removed session 7. May 17 00:25:17.689268 systemd[1]: Started sshd@7-172.31.23.228:22-147.75.109.163:59580.service - OpenSSH per-connection server daemon (147.75.109.163:59580). May 17 00:25:17.796582 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 17 00:25:17.805691 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:25:17.839726 sshd[2279]: Accepted publickey for core from 147.75.109.163 port 59580 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:25:17.841210 sshd[2279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:25:17.845848 systemd-logind[1955]: New session 8 of user core. May 17 00:25:17.852604 systemd[1]: Started session-8.scope - Session 8 of User core. 
May 17 00:25:17.950038 sudo[2286]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 17 00:25:17.950456 sudo[2286]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:25:17.954790 sudo[2286]: pam_unix(sudo:session): session closed for user root May 17 00:25:17.961466 sudo[2285]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 17 00:25:17.961892 sudo[2285]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:25:17.976804 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 17 00:25:17.980276 auditctl[2289]: No rules May 17 00:25:17.980689 systemd[1]: audit-rules.service: Deactivated successfully. May 17 00:25:17.980909 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 17 00:25:17.990049 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:25:18.024095 augenrules[2307]: No rules May 17 00:25:18.024103 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:25:18.026952 sudo[2285]: pam_unix(sudo:session): session closed for user root May 17 00:25:18.050830 sshd[2279]: pam_unix(sshd:session): session closed for user core May 17 00:25:18.055963 systemd[1]: sshd@7-172.31.23.228:22-147.75.109.163:59580.service: Deactivated successfully. May 17 00:25:18.059010 systemd[1]: session-8.scope: Deactivated successfully. May 17 00:25:18.060202 systemd-logind[1955]: Session 8 logged out. Waiting for processes to exit. May 17 00:25:18.062342 systemd-logind[1955]: Removed session 8. May 17 00:25:18.089742 systemd[1]: Started sshd@8-172.31.23.228:22-147.75.109.163:41008.service - OpenSSH per-connection server daemon (147.75.109.163:41008). May 17 00:25:18.103328 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:25:18.108487 (kubelet)[2322]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:25:18.147662 kubelet[2322]: E0517 00:25:18.147625 2322 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:25:18.151790 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:25:18.151948 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:25:18.240649 sshd[2317]: Accepted publickey for core from 147.75.109.163 port 41008 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:25:18.242446 sshd[2317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:25:18.246696 systemd-logind[1955]: New session 9 of user core. May 17 00:25:18.257646 systemd[1]: Started session-9.scope - Session 9 of User core. May 17 00:25:18.351820 sudo[2331]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:25:18.352102 sudo[2331]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:25:18.728819 systemd[1]: Starting docker.service - Docker Application Container Engine... 
May 17 00:25:18.728926 (dockerd)[2347]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 17 00:25:19.088802 dockerd[2347]: time="2025-05-17T00:25:19.088747246Z" level=info msg="Starting up" May 17 00:25:19.230648 dockerd[2347]: time="2025-05-17T00:25:19.230601679Z" level=info msg="Loading containers: start." May 17 00:25:19.351468 kernel: Initializing XFRM netlink socket May 17 00:25:19.381597 (udev-worker)[2371]: Network interface NamePolicy= disabled on kernel command line. May 17 00:25:19.445274 systemd-networkd[1893]: docker0: Link UP May 17 00:25:19.475275 dockerd[2347]: time="2025-05-17T00:25:19.475214887Z" level=info msg="Loading containers: done." May 17 00:25:19.499949 dockerd[2347]: time="2025-05-17T00:25:19.499892170Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 00:25:19.500177 dockerd[2347]: time="2025-05-17T00:25:19.500003663Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 17 00:25:19.500177 dockerd[2347]: time="2025-05-17T00:25:19.500108841Z" level=info msg="Daemon has completed initialization" May 17 00:25:19.544527 dockerd[2347]: time="2025-05-17T00:25:19.544184673Z" level=info msg="API listen on /run/docker.sock" May 17 00:25:19.544462 systemd[1]: Started docker.service - Docker Application Container Engine. May 17 00:25:20.829443 containerd[1974]: time="2025-05-17T00:25:20.829375771Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\"" May 17 00:25:21.390032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1010223468.mount: Deactivated successfully. 
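The dockerd warning above about degraded overlay2 diff performance keys off a single kernel build option, CONFIG_OVERLAY_FS_REDIRECT_DIR. On kernels built with CONFIG_IKCONFIG_PROC it can be checked directly from /proc/config.gz; a sketch (path availability varies by distro):

    import gzip

    # Look up a kernel build option in /proc/config.gz; some distros ship
    # /boot/config-$(uname -r) instead of exposing it via procfs.
    def kernel_option(name: str, path: str = "/proc/config.gz"):
        with gzip.open(path, "rt") as f:
            for line in f:
                if line.startswith(name + "="):
                    return line.strip().split("=", 1)[1]
        return None

    print(kernel_option("CONFIG_OVERLAY_FS_REDIRECT_DIR"))  # "y" -> warning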
May 17 00:25:22.629954 containerd[1974]: time="2025-05-17T00:25:22.629896258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:22.631031 containerd[1974]: time="2025-05-17T00:25:22.630984605Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.9: active requests=0, bytes read=28078845" May 17 00:25:22.633275 containerd[1974]: time="2025-05-17T00:25:22.633222786Z" level=info msg="ImageCreate event name:\"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:22.636326 containerd[1974]: time="2025-05-17T00:25:22.636258262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:22.637226 containerd[1974]: time="2025-05-17T00:25:22.637066748Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.9\" with image id \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\", size \"28075645\" in 1.807651325s" May 17 00:25:22.637226 containerd[1974]: time="2025-05-17T00:25:22.637102541Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\"" May 17 00:25:22.637974 containerd[1974]: time="2025-05-17T00:25:22.637839196Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\"" May 17 00:25:24.087202 containerd[1974]: time="2025-05-17T00:25:24.087148006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:24.088452 containerd[1974]: time="2025-05-17T00:25:24.088391485Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.9: active requests=0, bytes read=24713522" May 17 00:25:24.089927 containerd[1974]: time="2025-05-17T00:25:24.089865911Z" level=info msg="ImageCreate event name:\"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:24.093643 containerd[1974]: time="2025-05-17T00:25:24.093604773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:24.095647 containerd[1974]: time="2025-05-17T00:25:24.094962341Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.9\" with image id \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\", size \"26315362\" in 1.457095214s" May 17 00:25:24.095647 containerd[1974]: time="2025-05-17T00:25:24.095008030Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\"" May 17 00:25:24.095941 
containerd[1974]: time="2025-05-17T00:25:24.095913275Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\"" May 17 00:25:25.315839 containerd[1974]: time="2025-05-17T00:25:25.315730097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:25.327750 containerd[1974]: time="2025-05-17T00:25:25.327685234Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.9: active requests=0, bytes read=18784311" May 17 00:25:25.343023 containerd[1974]: time="2025-05-17T00:25:25.342927307Z" level=info msg="ImageCreate event name:\"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:25.361158 containerd[1974]: time="2025-05-17T00:25:25.361071947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:25.362204 containerd[1974]: time="2025-05-17T00:25:25.362036940Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.9\" with image id \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\", size \"20386169\" in 1.266090968s" May 17 00:25:25.362204 containerd[1974]: time="2025-05-17T00:25:25.362074968Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\"" May 17 00:25:25.363065 containerd[1974]: time="2025-05-17T00:25:25.362906942Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 17 00:25:26.352219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1209263027.mount: Deactivated successfully. 
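A back-of-envelope on the pull records so far: dividing each image's "bytes read" by the wall time containerd reports gives the effective registry throughput, a quick way to judge whether pulls are network-bound. Figures copied from the log lines above:

    # (bytes read, seconds) pairs from the containerd pull lines above.
    pulls = {
        "kube-apiserver:v1.31.9":          (28078845, 1.807651325),
        "kube-controller-manager:v1.31.9": (24713522, 1.457095214),
        "kube-scheduler:v1.31.9":          (18784311, 1.266090968),
    }
    for name, (nbytes, secs) in pulls.items():
        print(f"{name}: {nbytes / secs / 1e6:.1f} MB/s")
    # ~15.5, ~17.0 and ~14.8 MB/s respectively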
May 17 00:25:26.925243 containerd[1974]: time="2025-05-17T00:25:26.925189982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:26.929497 containerd[1974]: time="2025-05-17T00:25:26.929319912Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.9: active requests=0, bytes read=30355623" May 17 00:25:26.933914 containerd[1974]: time="2025-05-17T00:25:26.933817272Z" level=info msg="ImageCreate event name:\"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:26.939920 containerd[1974]: time="2025-05-17T00:25:26.939872671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:26.940567 containerd[1974]: time="2025-05-17T00:25:26.940406237Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.9\" with image id \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\", repo tag \"registry.k8s.io/kube-proxy:v1.31.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\", size \"30354642\" in 1.577468262s" May 17 00:25:26.940567 containerd[1974]: time="2025-05-17T00:25:26.940460501Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\"" May 17 00:25:26.941478 containerd[1974]: time="2025-05-17T00:25:26.941167090Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 00:25:27.437062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount109232898.mount: Deactivated successfully. May 17 00:25:28.296753 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 17 00:25:28.306696 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
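The \x2d runs in the mount unit names above (var-lib-containerd-tmpmounts-containerd\x2dmount…) are systemd's path escaping: '/' separators become '-', so any literal '-' inside a path component has to be hex-escaped. A rough reimplementation of the systemd-escape --path rule; the exact allowed-character set is my assumption:

    # Approximation of `systemd-escape --path`: keep [A-Za-z0-9:_] and
    # non-leading '.', hex-escape everything else, join components with '-'.
    def systemd_escape_path(p: str) -> str:
        def esc(component: str) -> str:
            out = []
            for i, ch in enumerate(component):
                if ch.isalnum() or ch in ":_" or (ch == "." and i > 0):
                    out.append(ch)
                else:
                    out.append("\\x%02x" % ord(ch))
            return "".join(out)
        return "-".join(esc(c) for c in p.strip("/").split("/") if c)

    print(systemd_escape_path(
        "/var/lib/containerd/tmpmounts/containerd-mount109232898"))
    # -> var-lib-containerd-tmpmounts-containerd\x2dmount109232898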
May 17 00:25:28.476706 containerd[1974]: time="2025-05-17T00:25:28.475637725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:28.482079 containerd[1974]: time="2025-05-17T00:25:28.482018768Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 17 00:25:28.488997 containerd[1974]: time="2025-05-17T00:25:28.488920549Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:28.501879 containerd[1974]: time="2025-05-17T00:25:28.501807440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:28.503234 containerd[1974]: time="2025-05-17T00:25:28.503091316Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.561893106s" May 17 00:25:28.503234 containerd[1974]: time="2025-05-17T00:25:28.503127506Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 17 00:25:28.503934 containerd[1974]: time="2025-05-17T00:25:28.503757753Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 00:25:29.007543 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:25:29.018839 (kubelet)[2616]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:25:29.062760 kubelet[2616]: E0517 00:25:29.062707 2616 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:25:29.065373 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:25:29.065595 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:25:29.188494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1236860015.mount: Deactivated successfully. 
May 17 00:25:29.196912 containerd[1974]: time="2025-05-17T00:25:29.196860928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:29.198173 containerd[1974]: time="2025-05-17T00:25:29.198117146Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 17 00:25:29.199970 containerd[1974]: time="2025-05-17T00:25:29.199918500Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:29.203475 containerd[1974]: time="2025-05-17T00:25:29.203385750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:29.204133 containerd[1974]: time="2025-05-17T00:25:29.204009921Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 700.220422ms" May 17 00:25:29.204133 containerd[1974]: time="2025-05-17T00:25:29.204045873Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 17 00:25:29.204721 containerd[1974]: time="2025-05-17T00:25:29.204580424Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 17 00:25:29.732895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount159159381.mount: Deactivated successfully. May 17 00:25:31.733410 containerd[1974]: time="2025-05-17T00:25:31.733349015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:31.734592 containerd[1974]: time="2025-05-17T00:25:31.734536346Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" May 17 00:25:31.736280 containerd[1974]: time="2025-05-17T00:25:31.736227814Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:31.739529 containerd[1974]: time="2025-05-17T00:25:31.739485523Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:31.740660 containerd[1974]: time="2025-05-17T00:25:31.740525508Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.535920261s" May 17 00:25:31.740660 containerd[1974]: time="2025-05-17T00:25:31.740554684Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 17 00:25:33.366555 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
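Aside: the pull entries above log both "bytes read" and a wall-clock duration, so an effective pull rate falls out directly. An illustrative calculation with the four values copied from the log (the durations include manifest resolution and unpacking, so this understates raw transfer speed):

```python
# (bytes read, reported duration in seconds), taken verbatim from the log above
pulls = {
    "kube-scheduler:v1.31.9": (18_784_311, 1.266090968),
    "kube-proxy:v1.31.9":     (30_355_623, 1.577468262),
    "coredns:v1.11.3":        (18_565_241, 1.561893106),
    "etcd:3.5.15-0":          (56_780_013, 2.535920261),
}
for image, (bytes_read, seconds) in pulls.items():
    print(f"{image}: {bytes_read / seconds / 1e6:.1f} MB/s")  # etcd comes out around 22.4 MB/s
```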
May 17 00:25:34.980672 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:25:34.986821 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:25:35.019794 systemd[1]: Reloading requested from client PID 2712 ('systemctl') (unit session-9.scope)... May 17 00:25:35.019814 systemd[1]: Reloading... May 17 00:25:35.093944 zram_generator::config[2748]: No configuration found. May 17 00:25:35.260299 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:25:35.348964 systemd[1]: Reloading finished in 328 ms. May 17 00:25:35.407939 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:25:35.408149 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:25:35.414861 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:25:36.014689 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:25:36.027867 (kubelet)[2816]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:25:36.069450 kubelet[2816]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:25:36.069450 kubelet[2816]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 00:25:36.069450 kubelet[2816]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
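Aside: the docker.socket warning during the reload is benign because on current systems /var/run is a symlink to /run, so systemd can rewrite the legacy path without changing which socket is bound. A quick check of that aliasing (assumes a typical Linux host; paths may differ elsewhere):

```python
import os

print(os.path.realpath("/var/run"))              # typically "/run"
print(os.path.realpath("/var/run/docker.sock"))  # "/run/docker.sock" on such hosts
```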
May 17 00:25:36.069450 kubelet[2816]: I0517 00:25:36.068877 2816 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:25:36.441938 kubelet[2816]: I0517 00:25:36.441890 2816 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:25:36.441938 kubelet[2816]: I0517 00:25:36.441923 2816 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:25:36.444268 kubelet[2816]: I0517 00:25:36.444233 2816 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:25:36.498504 kubelet[2816]: E0517 00:25:36.498448 2816 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.23.228:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.23.228:6443: connect: connection refused" logger="UnhandledError" May 17 00:25:36.498973 kubelet[2816]: I0517 00:25:36.498939 2816 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:25:36.522992 kubelet[2816]: E0517 00:25:36.522917 2816 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:25:36.522992 kubelet[2816]: I0517 00:25:36.522949 2816 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:25:36.527007 kubelet[2816]: I0517 00:25:36.526905 2816 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:25:36.530979 kubelet[2816]: I0517 00:25:36.530928 2816 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:25:36.531140 kubelet[2816]: I0517 00:25:36.531108 2816 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:25:36.531314 kubelet[2816]: I0517 00:25:36.531136 2816 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-228","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:25:36.531399 kubelet[2816]: I0517 00:25:36.531317 2816 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:25:36.531399 kubelet[2816]: I0517 00:25:36.531326 2816 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:25:36.531464 kubelet[2816]: I0517 00:25:36.531444 2816 state_mem.go:36] "Initialized new in-memory state store" May 17 00:25:36.536020 kubelet[2816]: I0517 00:25:36.535783 2816 kubelet.go:408] "Attempting to sync node with API server" May 17 00:25:36.536020 kubelet[2816]: I0517 00:25:36.535822 2816 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:25:36.536020 kubelet[2816]: I0517 00:25:36.535856 2816 kubelet.go:314] "Adding apiserver pod source" May 17 00:25:36.536020 kubelet[2816]: I0517 00:25:36.535873 2816 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:25:36.545150 kubelet[2816]: W0517 00:25:36.545016 2816 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.228:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-228&limit=500&resourceVersion=0": dial tcp 172.31.23.228:6443: connect: connection refused May 17 00:25:36.545150 kubelet[2816]: E0517 00:25:36.545128 2816 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.31.23.228:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-228&limit=500&resourceVersion=0\": dial tcp 172.31.23.228:6443: connect: connection refused" logger="UnhandledError" May 17 00:25:36.545288 kubelet[2816]: I0517 00:25:36.545231 2816 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:25:36.547826 kubelet[2816]: W0517 00:25:36.547770 2816 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.228:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.23.228:6443: connect: connection refused May 17 00:25:36.547826 kubelet[2816]: E0517 00:25:36.547829 2816 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.23.228:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.23.228:6443: connect: connection refused" logger="UnhandledError" May 17 00:25:36.549621 kubelet[2816]: I0517 00:25:36.549598 2816 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:25:36.549701 kubelet[2816]: W0517 00:25:36.549657 2816 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 17 00:25:36.550233 kubelet[2816]: I0517 00:25:36.550194 2816 server.go:1274] "Started kubelet" May 17 00:25:36.555603 kubelet[2816]: I0517 00:25:36.555547 2816 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:25:36.557112 kubelet[2816]: I0517 00:25:36.556385 2816 server.go:449] "Adding debug handlers to kubelet server" May 17 00:25:36.557112 kubelet[2816]: I0517 00:25:36.556542 2816 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:25:36.557112 kubelet[2816]: I0517 00:25:36.556851 2816 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:25:36.560576 kubelet[2816]: E0517 00:25:36.557042 2816 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.228:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.228:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-228.184028d1b3d5e041 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-228,UID:ip-172-31-23-228,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-228,},FirstTimestamp:2025-05-17 00:25:36.550166593 +0000 UTC m=+0.518653693,LastTimestamp:2025-05-17 00:25:36.550166593 +0000 UTC m=+0.518653693,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-228,}" May 17 00:25:36.560576 kubelet[2816]: I0517 00:25:36.560749 2816 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:25:36.560576 kubelet[2816]: I0517 00:25:36.560964 2816 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:25:36.565741 kubelet[2816]: I0517 00:25:36.565305 2816 volume_manager.go:289] "Starting Kubelet Volume 
Manager" May 17 00:25:36.567989 kubelet[2816]: I0517 00:25:36.567956 2816 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:25:36.568085 kubelet[2816]: I0517 00:25:36.568027 2816 reconciler.go:26] "Reconciler: start to sync state" May 17 00:25:36.568379 kubelet[2816]: W0517 00:25:36.568343 2816 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.228:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.228:6443: connect: connection refused May 17 00:25:36.568414 kubelet[2816]: E0517 00:25:36.568389 2816 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.23.228:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.23.228:6443: connect: connection refused" logger="UnhandledError" May 17 00:25:36.570658 kubelet[2816]: E0517 00:25:36.570608 2816 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-23-228\" not found" May 17 00:25:36.571215 kubelet[2816]: E0517 00:25:36.571001 2816 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.228:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-228?timeout=10s\": dial tcp 172.31.23.228:6443: connect: connection refused" interval="200ms" May 17 00:25:36.571215 kubelet[2816]: I0517 00:25:36.571140 2816 factory.go:221] Registration of the systemd container factory successfully May 17 00:25:36.571781 kubelet[2816]: I0517 00:25:36.571652 2816 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:25:36.580295 kubelet[2816]: E0517 00:25:36.580211 2816 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:25:36.581022 kubelet[2816]: I0517 00:25:36.580979 2816 factory.go:221] Registration of the containerd container factory successfully May 17 00:25:36.603146 kubelet[2816]: I0517 00:25:36.603066 2816 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:25:36.603146 kubelet[2816]: I0517 00:25:36.603083 2816 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:25:36.603146 kubelet[2816]: I0517 00:25:36.603100 2816 state_mem.go:36] "Initialized new in-memory state store" May 17 00:25:36.605659 kubelet[2816]: I0517 00:25:36.605616 2816 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:25:36.608377 kubelet[2816]: I0517 00:25:36.607224 2816 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 00:25:36.608377 kubelet[2816]: I0517 00:25:36.607370 2816 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:25:36.608377 kubelet[2816]: I0517 00:25:36.607404 2816 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:25:36.608377 kubelet[2816]: E0517 00:25:36.607592 2816 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:25:36.608790 kubelet[2816]: W0517 00:25:36.608746 2816 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.228:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.228:6443: connect: connection refused May 17 00:25:36.608831 kubelet[2816]: E0517 00:25:36.608814 2816 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.23.228:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.23.228:6443: connect: connection refused" logger="UnhandledError" May 17 00:25:36.608887 kubelet[2816]: I0517 00:25:36.608877 2816 policy_none.go:49] "None policy: Start" May 17 00:25:36.610015 kubelet[2816]: I0517 00:25:36.609995 2816 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:25:36.610096 kubelet[2816]: I0517 00:25:36.610030 2816 state_mem.go:35] "Initializing new in-memory state store" May 17 00:25:36.624698 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 17 00:25:36.637227 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 17 00:25:36.640386 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 17 00:25:36.653642 kubelet[2816]: I0517 00:25:36.653610 2816 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:25:36.654051 kubelet[2816]: I0517 00:25:36.654037 2816 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:25:36.654564 kubelet[2816]: I0517 00:25:36.654523 2816 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:25:36.657336 kubelet[2816]: I0517 00:25:36.656981 2816 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:25:36.657902 kubelet[2816]: E0517 00:25:36.657883 2816 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-23-228\" not found" May 17 00:25:36.720145 systemd[1]: Created slice kubepods-burstable-pod6fe575af699b14accfbe6fbd89850d73.slice - libcontainer container kubepods-burstable-pod6fe575af699b14accfbe6fbd89850d73.slice. May 17 00:25:36.740060 systemd[1]: Created slice kubepods-burstable-pod57fbf7c520c86adf73736250868a624b.slice - libcontainer container kubepods-burstable-pod57fbf7c520c86adf73736250868a624b.slice. 
May 17 00:25:36.757037 kubelet[2816]: I0517 00:25:36.756949 2816 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-228" May 17 00:25:36.757455 kubelet[2816]: E0517 00:25:36.757328 2816 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.23.228:6443/api/v1/nodes\": dial tcp 172.31.23.228:6443: connect: connection refused" node="ip-172-31-23-228" May 17 00:25:36.759268 systemd[1]: Created slice kubepods-burstable-pode546bd5e3899b87f8d9bf69b6a73d679.slice - libcontainer container kubepods-burstable-pode546bd5e3899b87f8d9bf69b6a73d679.slice. May 17 00:25:36.772146 kubelet[2816]: E0517 00:25:36.772075 2816 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.228:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-228?timeout=10s\": dial tcp 172.31.23.228:6443: connect: connection refused" interval="400ms" May 17 00:25:36.869753 kubelet[2816]: I0517 00:25:36.869712 2816 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6fe575af699b14accfbe6fbd89850d73-ca-certs\") pod \"kube-apiserver-ip-172-31-23-228\" (UID: \"6fe575af699b14accfbe6fbd89850d73\") " pod="kube-system/kube-apiserver-ip-172-31-23-228" May 17 00:25:36.869753 kubelet[2816]: I0517 00:25:36.869751 2816 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6fe575af699b14accfbe6fbd89850d73-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-228\" (UID: \"6fe575af699b14accfbe6fbd89850d73\") " pod="kube-system/kube-apiserver-ip-172-31-23-228" May 17 00:25:36.869753 kubelet[2816]: I0517 00:25:36.869773 2816 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/57fbf7c520c86adf73736250868a624b-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-228\" (UID: \"57fbf7c520c86adf73736250868a624b\") " pod="kube-system/kube-controller-manager-ip-172-31-23-228" May 17 00:25:36.869960 kubelet[2816]: I0517 00:25:36.869788 2816 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/57fbf7c520c86adf73736250868a624b-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-228\" (UID: \"57fbf7c520c86adf73736250868a624b\") " pod="kube-system/kube-controller-manager-ip-172-31-23-228" May 17 00:25:36.869960 kubelet[2816]: I0517 00:25:36.869824 2816 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/57fbf7c520c86adf73736250868a624b-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-228\" (UID: \"57fbf7c520c86adf73736250868a624b\") " pod="kube-system/kube-controller-manager-ip-172-31-23-228" May 17 00:25:36.869960 kubelet[2816]: I0517 00:25:36.869840 2816 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/57fbf7c520c86adf73736250868a624b-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-228\" (UID: \"57fbf7c520c86adf73736250868a624b\") " pod="kube-system/kube-controller-manager-ip-172-31-23-228" May 17 00:25:36.869960 kubelet[2816]: I0517 00:25:36.869857 2816 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6fe575af699b14accfbe6fbd89850d73-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-228\" (UID: \"6fe575af699b14accfbe6fbd89850d73\") " pod="kube-system/kube-apiserver-ip-172-31-23-228" May 17 00:25:36.869960 kubelet[2816]: I0517 00:25:36.869876 2816 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/57fbf7c520c86adf73736250868a624b-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-228\" (UID: \"57fbf7c520c86adf73736250868a624b\") " pod="kube-system/kube-controller-manager-ip-172-31-23-228" May 17 00:25:36.870088 kubelet[2816]: I0517 00:25:36.869891 2816 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e546bd5e3899b87f8d9bf69b6a73d679-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-228\" (UID: \"e546bd5e3899b87f8d9bf69b6a73d679\") " pod="kube-system/kube-scheduler-ip-172-31-23-228" May 17 00:25:36.959412 kubelet[2816]: I0517 00:25:36.959377 2816 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-228" May 17 00:25:36.959699 kubelet[2816]: E0517 00:25:36.959676 2816 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.23.228:6443/api/v1/nodes\": dial tcp 172.31.23.228:6443: connect: connection refused" node="ip-172-31-23-228" May 17 00:25:37.038790 containerd[1974]: time="2025-05-17T00:25:37.038615550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-228,Uid:6fe575af699b14accfbe6fbd89850d73,Namespace:kube-system,Attempt:0,}" May 17 00:25:37.063661 containerd[1974]: time="2025-05-17T00:25:37.063536483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-228,Uid:e546bd5e3899b87f8d9bf69b6a73d679,Namespace:kube-system,Attempt:0,}" May 17 00:25:37.063866 containerd[1974]: time="2025-05-17T00:25:37.063542657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-228,Uid:57fbf7c520c86adf73736250868a624b,Namespace:kube-system,Attempt:0,}" May 17 00:25:37.173444 kubelet[2816]: E0517 00:25:37.173376 2816 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.228:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-228?timeout=10s\": dial tcp 172.31.23.228:6443: connect: connection refused" interval="800ms" May 17 00:25:37.361881 kubelet[2816]: I0517 00:25:37.361854 2816 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-228" May 17 00:25:37.362276 kubelet[2816]: E0517 00:25:37.362116 2816 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.23.228:6443/api/v1/nodes\": dial tcp 172.31.23.228:6443: connect: connection refused" node="ip-172-31-23-228" May 17 00:25:37.507261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3449690286.mount: Deactivated successfully. 
May 17 00:25:37.517806 containerd[1974]: time="2025-05-17T00:25:37.517755043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:25:37.518924 containerd[1974]: time="2025-05-17T00:25:37.518884752Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:25:37.520351 containerd[1974]: time="2025-05-17T00:25:37.520309557Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:25:37.521530 containerd[1974]: time="2025-05-17T00:25:37.521500174Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:25:37.522628 containerd[1974]: time="2025-05-17T00:25:37.522588045Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:25:37.524006 containerd[1974]: time="2025-05-17T00:25:37.523959457Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:25:37.524975 containerd[1974]: time="2025-05-17T00:25:37.524930241Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 17 00:25:37.527280 containerd[1974]: time="2025-05-17T00:25:37.527195796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:25:37.528356 containerd[1974]: time="2025-05-17T00:25:37.527997130Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 464.384392ms" May 17 00:25:37.529040 containerd[1974]: time="2025-05-17T00:25:37.528997452Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 465.234643ms" May 17 00:25:37.533273 kubelet[2816]: W0517 00:25:37.533213 2816 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.228:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-228&limit=500&resourceVersion=0": dial tcp 172.31.23.228:6443: connect: connection refused May 17 00:25:37.533377 kubelet[2816]: E0517 00:25:37.533283 2816 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.23.228:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-228&limit=500&resourceVersion=0\": dial tcp 172.31.23.228:6443: connect: connection refused" logger="UnhandledError" May 17 
00:25:37.535049 containerd[1974]: time="2025-05-17T00:25:37.535001300Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 496.305468ms" May 17 00:25:37.686083 kubelet[2816]: W0517 00:25:37.685943 2816 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.228:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.23.228:6443: connect: connection refused May 17 00:25:37.686639 kubelet[2816]: E0517 00:25:37.686502 2816 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.23.228:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.23.228:6443: connect: connection refused" logger="UnhandledError" May 17 00:25:37.727280 containerd[1974]: time="2025-05-17T00:25:37.727056629Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:25:37.727280 containerd[1974]: time="2025-05-17T00:25:37.727110516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:25:37.727280 containerd[1974]: time="2025-05-17T00:25:37.727126246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:25:37.727280 containerd[1974]: time="2025-05-17T00:25:37.727195998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:25:37.732708 containerd[1974]: time="2025-05-17T00:25:37.732457337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:25:37.733446 containerd[1974]: time="2025-05-17T00:25:37.731202716Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:25:37.733446 containerd[1974]: time="2025-05-17T00:25:37.732836333Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:25:37.733446 containerd[1974]: time="2025-05-17T00:25:37.732850121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:25:37.733446 containerd[1974]: time="2025-05-17T00:25:37.732556229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:25:37.733446 containerd[1974]: time="2025-05-17T00:25:37.732569938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:25:37.733805 containerd[1974]: time="2025-05-17T00:25:37.733775662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:25:37.734615 containerd[1974]: time="2025-05-17T00:25:37.734543433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:25:37.756616 systemd[1]: Started cri-containerd-1298a3a140b59d78ce99026de9bf6be1a890620dac404ac53b6738fda66315f8.scope - libcontainer container 1298a3a140b59d78ce99026de9bf6be1a890620dac404ac53b6738fda66315f8. May 17 00:25:37.769926 systemd[1]: Started cri-containerd-10dbb15031ee9db1ec4394fc4e0c2415afc61fc358f2e7d222a09a904501676a.scope - libcontainer container 10dbb15031ee9db1ec4394fc4e0c2415afc61fc358f2e7d222a09a904501676a. May 17 00:25:37.771989 systemd[1]: Started cri-containerd-31ae02f252b8d2bf82ed57445631efed4715051458ef2a7a8333088346d0bc21.scope - libcontainer container 31ae02f252b8d2bf82ed57445631efed4715051458ef2a7a8333088346d0bc21. May 17 00:25:37.836995 containerd[1974]: time="2025-05-17T00:25:37.836414479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-228,Uid:e546bd5e3899b87f8d9bf69b6a73d679,Namespace:kube-system,Attempt:0,} returns sandbox id \"10dbb15031ee9db1ec4394fc4e0c2415afc61fc358f2e7d222a09a904501676a\"" May 17 00:25:37.843141 containerd[1974]: time="2025-05-17T00:25:37.842674063Z" level=info msg="CreateContainer within sandbox \"10dbb15031ee9db1ec4394fc4e0c2415afc61fc358f2e7d222a09a904501676a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:25:37.844414 containerd[1974]: time="2025-05-17T00:25:37.844246361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-228,Uid:57fbf7c520c86adf73736250868a624b,Namespace:kube-system,Attempt:0,} returns sandbox id \"1298a3a140b59d78ce99026de9bf6be1a890620dac404ac53b6738fda66315f8\"" May 17 00:25:37.849139 containerd[1974]: time="2025-05-17T00:25:37.848916228Z" level=info msg="CreateContainer within sandbox \"1298a3a140b59d78ce99026de9bf6be1a890620dac404ac53b6738fda66315f8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:25:37.849724 containerd[1974]: time="2025-05-17T00:25:37.849525658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-228,Uid:6fe575af699b14accfbe6fbd89850d73,Namespace:kube-system,Attempt:0,} returns sandbox id \"31ae02f252b8d2bf82ed57445631efed4715051458ef2a7a8333088346d0bc21\"" May 17 00:25:37.853314 containerd[1974]: time="2025-05-17T00:25:37.852971461Z" level=info msg="CreateContainer within sandbox \"31ae02f252b8d2bf82ed57445631efed4715051458ef2a7a8333088346d0bc21\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:25:37.895615 containerd[1974]: time="2025-05-17T00:25:37.895565084Z" level=info msg="CreateContainer within sandbox \"10dbb15031ee9db1ec4394fc4e0c2415afc61fc358f2e7d222a09a904501676a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"93137452b0ff0fdc42495b478ed0d47fb5b3b795dea21ad1f06df9e5b1681e42\"" May 17 00:25:37.896375 containerd[1974]: time="2025-05-17T00:25:37.896313224Z" level=info msg="StartContainer for \"93137452b0ff0fdc42495b478ed0d47fb5b3b795dea21ad1f06df9e5b1681e42\"" May 17 00:25:37.900712 containerd[1974]: time="2025-05-17T00:25:37.900606465Z" level=info msg="CreateContainer within sandbox \"31ae02f252b8d2bf82ed57445631efed4715051458ef2a7a8333088346d0bc21\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"b09fc4ee6d20fd18e89c6ca3a8843fada1644039e3dc7008d1595208e2fcff9b\"" May 17 00:25:37.901943 containerd[1974]: time="2025-05-17T00:25:37.901161973Z" level=info msg="StartContainer for \"b09fc4ee6d20fd18e89c6ca3a8843fada1644039e3dc7008d1595208e2fcff9b\"" May 17 00:25:37.904446 containerd[1974]: time="2025-05-17T00:25:37.902190904Z" level=info msg="CreateContainer within sandbox \"1298a3a140b59d78ce99026de9bf6be1a890620dac404ac53b6738fda66315f8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"55a075ed4342b8f089ea420e3c945036935bb00ec94429441eb8b9ee6f5b47ac\"" May 17 00:25:37.904895 containerd[1974]: time="2025-05-17T00:25:37.904869783Z" level=info msg="StartContainer for \"55a075ed4342b8f089ea420e3c945036935bb00ec94429441eb8b9ee6f5b47ac\"" May 17 00:25:37.937039 systemd[1]: Started cri-containerd-b09fc4ee6d20fd18e89c6ca3a8843fada1644039e3dc7008d1595208e2fcff9b.scope - libcontainer container b09fc4ee6d20fd18e89c6ca3a8843fada1644039e3dc7008d1595208e2fcff9b. May 17 00:25:37.944607 systemd[1]: Started cri-containerd-55a075ed4342b8f089ea420e3c945036935bb00ec94429441eb8b9ee6f5b47ac.scope - libcontainer container 55a075ed4342b8f089ea420e3c945036935bb00ec94429441eb8b9ee6f5b47ac. May 17 00:25:37.946086 systemd[1]: Started cri-containerd-93137452b0ff0fdc42495b478ed0d47fb5b3b795dea21ad1f06df9e5b1681e42.scope - libcontainer container 93137452b0ff0fdc42495b478ed0d47fb5b3b795dea21ad1f06df9e5b1681e42. May 17 00:25:37.975682 kubelet[2816]: E0517 00:25:37.975626 2816 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.228:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-228?timeout=10s\": dial tcp 172.31.23.228:6443: connect: connection refused" interval="1.6s" May 17 00:25:38.015059 containerd[1974]: time="2025-05-17T00:25:38.015021274Z" level=info msg="StartContainer for \"b09fc4ee6d20fd18e89c6ca3a8843fada1644039e3dc7008d1595208e2fcff9b\" returns successfully" May 17 00:25:38.028358 kubelet[2816]: W0517 00:25:38.028294 2816 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.228:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.228:6443: connect: connection refused May 17 00:25:38.028570 kubelet[2816]: E0517 00:25:38.028552 2816 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.23.228:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.23.228:6443: connect: connection refused" logger="UnhandledError" May 17 00:25:38.029707 containerd[1974]: time="2025-05-17T00:25:38.029573546Z" level=info msg="StartContainer for \"93137452b0ff0fdc42495b478ed0d47fb5b3b795dea21ad1f06df9e5b1681e42\" returns successfully" May 17 00:25:38.029707 containerd[1974]: time="2025-05-17T00:25:38.029658708Z" level=info msg="StartContainer for \"55a075ed4342b8f089ea420e3c945036935bb00ec94429441eb8b9ee6f5b47ac\" returns successfully" May 17 00:25:38.104360 kubelet[2816]: W0517 00:25:38.104272 2816 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.228:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.228:6443: connect: connection refused May 17 00:25:38.104360 kubelet[2816]: E0517 00:25:38.104328 2816 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.23.228:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.23.228:6443: connect: connection refused" logger="UnhandledError" May 17 00:25:38.166525 kubelet[2816]: I0517 00:25:38.164213 2816 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-228" May 17 00:25:38.166861 kubelet[2816]: E0517 00:25:38.166833 2816 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.23.228:6443/api/v1/nodes\": dial tcp 172.31.23.228:6443: connect: connection refused" node="ip-172-31-23-228" May 17 00:25:38.573448 kubelet[2816]: E0517 00:25:38.571676 2816 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.23.228:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.23.228:6443: connect: connection refused" logger="UnhandledError" May 17 00:25:39.769761 kubelet[2816]: I0517 00:25:39.769117 2816 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-228" May 17 00:25:40.759925 kubelet[2816]: E0517 00:25:40.759881 2816 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-23-228\" not found" node="ip-172-31-23-228" May 17 00:25:40.899848 kubelet[2816]: I0517 00:25:40.899772 2816 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-23-228" May 17 00:25:41.279031 kubelet[2816]: E0517 00:25:41.278832 2816 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-23-228\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-23-228" May 17 00:25:41.547116 kubelet[2816]: I0517 00:25:41.547081 2816 apiserver.go:52] "Watching apiserver" May 17 00:25:41.568640 kubelet[2816]: I0517 00:25:41.568607 2816 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:25:42.991867 systemd[1]: Reloading requested from client PID 3086 ('systemctl') (unit session-9.scope)... May 17 00:25:42.991884 systemd[1]: Reloading... May 17 00:25:43.076448 zram_generator::config[3122]: No configuration found. May 17 00:25:43.224414 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:25:43.326354 systemd[1]: Reloading finished in 333 ms. May 17 00:25:43.365157 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:25:43.378318 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:25:43.378625 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:25:43.384939 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:25:43.674075 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 17 00:25:43.679545 (kubelet)[3186]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:25:43.736109 kubelet[3186]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:25:43.736492 kubelet[3186]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 00:25:43.736546 kubelet[3186]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:25:43.736752 kubelet[3186]: I0517 00:25:43.736724 3186 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:25:43.747863 kubelet[3186]: I0517 00:25:43.747825 3186 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:25:43.747863 kubelet[3186]: I0517 00:25:43.747854 3186 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:25:43.748117 kubelet[3186]: I0517 00:25:43.748102 3186 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:25:43.749446 kubelet[3186]: I0517 00:25:43.749405 3186 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 17 00:25:43.754087 kubelet[3186]: I0517 00:25:43.753929 3186 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:25:43.757569 kubelet[3186]: E0517 00:25:43.757450 3186 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:25:43.757569 kubelet[3186]: I0517 00:25:43.757478 3186 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:25:43.761982 kubelet[3186]: I0517 00:25:43.761892 3186 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:25:43.763495 kubelet[3186]: I0517 00:25:43.763294 3186 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:25:43.766383 kubelet[3186]: I0517 00:25:43.766336 3186 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:25:43.766714 kubelet[3186]: I0517 00:25:43.766520 3186 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-228","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:25:43.766714 kubelet[3186]: I0517 00:25:43.766708 3186 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:25:43.766714 kubelet[3186]: I0517 00:25:43.766719 3186 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:25:43.766913 kubelet[3186]: I0517 00:25:43.766750 3186 state_mem.go:36] "Initialized new in-memory state store" May 17 00:25:43.766913 kubelet[3186]: I0517 00:25:43.766851 3186 kubelet.go:408] "Attempting to sync node with API server" May 17 00:25:43.766913 kubelet[3186]: I0517 00:25:43.766861 3186 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:25:43.768242 kubelet[3186]: I0517 00:25:43.768048 3186 kubelet.go:314] "Adding apiserver pod source" May 17 00:25:43.768242 kubelet[3186]: I0517 00:25:43.768070 3186 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:25:43.769474 kubelet[3186]: I0517 00:25:43.769393 3186 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:25:43.770315 kubelet[3186]: I0517 00:25:43.770297 3186 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:25:43.771277 kubelet[3186]: I0517 00:25:43.771256 3186 server.go:1274] "Started kubelet" May 17 00:25:43.774806 kubelet[3186]: I0517 00:25:43.774680 3186 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:25:43.783839 kubelet[3186]: I0517 
00:25:43.782743 3186 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:25:43.783839 kubelet[3186]: I0517 00:25:43.783597 3186 server.go:449] "Adding debug handlers to kubelet server" May 17 00:25:43.784546 kubelet[3186]: I0517 00:25:43.784418 3186 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:25:43.784705 kubelet[3186]: I0517 00:25:43.784685 3186 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:25:43.784920 kubelet[3186]: I0517 00:25:43.784903 3186 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:25:43.786242 kubelet[3186]: I0517 00:25:43.786213 3186 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:25:43.786518 kubelet[3186]: E0517 00:25:43.786477 3186 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-23-228\" not found" May 17 00:25:43.788714 kubelet[3186]: I0517 00:25:43.788698 3186 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:25:43.789154 kubelet[3186]: I0517 00:25:43.788941 3186 reconciler.go:26] "Reconciler: start to sync state" May 17 00:25:43.792131 kubelet[3186]: I0517 00:25:43.792011 3186 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:25:43.793189 kubelet[3186]: I0517 00:25:43.793171 3186 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 00:25:43.793497 kubelet[3186]: I0517 00:25:43.793278 3186 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:25:43.793497 kubelet[3186]: I0517 00:25:43.793300 3186 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:25:43.793497 kubelet[3186]: E0517 00:25:43.793341 3186 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:25:43.803243 kubelet[3186]: I0517 00:25:43.803222 3186 factory.go:221] Registration of the systemd container factory successfully May 17 00:25:43.803493 kubelet[3186]: I0517 00:25:43.803476 3186 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:25:43.808459 kubelet[3186]: I0517 00:25:43.807058 3186 factory.go:221] Registration of the containerd container factory successfully May 17 00:25:43.875916 kubelet[3186]: I0517 00:25:43.875885 3186 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:25:43.875916 kubelet[3186]: I0517 00:25:43.875904 3186 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:25:43.875916 kubelet[3186]: I0517 00:25:43.875921 3186 state_mem.go:36] "Initialized new in-memory state store" May 17 00:25:43.876103 kubelet[3186]: I0517 00:25:43.876064 3186 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:25:43.876103 kubelet[3186]: I0517 00:25:43.876074 3186 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:25:43.876103 kubelet[3186]: I0517 00:25:43.876091 3186 policy_none.go:49] "None policy: Start" May 17 00:25:43.876848 kubelet[3186]: I0517 00:25:43.876825 3186 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:25:43.876848 kubelet[3186]: I0517 00:25:43.876847 3186 
state_mem.go:35] "Initializing new in-memory state store" May 17 00:25:43.877004 kubelet[3186]: I0517 00:25:43.876989 3186 state_mem.go:75] "Updated machine memory state" May 17 00:25:43.881055 kubelet[3186]: I0517 00:25:43.881033 3186 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:25:43.882547 kubelet[3186]: I0517 00:25:43.881845 3186 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:25:43.882547 kubelet[3186]: I0517 00:25:43.881859 3186 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:25:43.882658 kubelet[3186]: I0517 00:25:43.882630 3186 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:25:43.905395 kubelet[3186]: E0517 00:25:43.905348 3186 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-23-228\" already exists" pod="kube-system/kube-apiserver-ip-172-31-23-228" May 17 00:25:43.906750 kubelet[3186]: E0517 00:25:43.906720 3186 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-23-228\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-23-228" May 17 00:25:43.985395 kubelet[3186]: I0517 00:25:43.985023 3186 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-228" May 17 00:25:43.990499 kubelet[3186]: I0517 00:25:43.990465 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6fe575af699b14accfbe6fbd89850d73-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-228\" (UID: \"6fe575af699b14accfbe6fbd89850d73\") " pod="kube-system/kube-apiserver-ip-172-31-23-228" May 17 00:25:43.990608 kubelet[3186]: I0517 00:25:43.990519 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/57fbf7c520c86adf73736250868a624b-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-228\" (UID: \"57fbf7c520c86adf73736250868a624b\") " pod="kube-system/kube-controller-manager-ip-172-31-23-228" May 17 00:25:43.990608 kubelet[3186]: I0517 00:25:43.990539 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/57fbf7c520c86adf73736250868a624b-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-228\" (UID: \"57fbf7c520c86adf73736250868a624b\") " pod="kube-system/kube-controller-manager-ip-172-31-23-228" May 17 00:25:43.990608 kubelet[3186]: I0517 00:25:43.990558 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/57fbf7c520c86adf73736250868a624b-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-228\" (UID: \"57fbf7c520c86adf73736250868a624b\") " pod="kube-system/kube-controller-manager-ip-172-31-23-228" May 17 00:25:43.990608 kubelet[3186]: I0517 00:25:43.990575 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e546bd5e3899b87f8d9bf69b6a73d679-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-228\" (UID: \"e546bd5e3899b87f8d9bf69b6a73d679\") " pod="kube-system/kube-scheduler-ip-172-31-23-228" May 17 00:25:43.990608 kubelet[3186]: I0517 00:25:43.990589 3186 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6fe575af699b14accfbe6fbd89850d73-ca-certs\") pod \"kube-apiserver-ip-172-31-23-228\" (UID: \"6fe575af699b14accfbe6fbd89850d73\") " pod="kube-system/kube-apiserver-ip-172-31-23-228" May 17 00:25:43.990740 kubelet[3186]: I0517 00:25:43.990603 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6fe575af699b14accfbe6fbd89850d73-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-228\" (UID: \"6fe575af699b14accfbe6fbd89850d73\") " pod="kube-system/kube-apiserver-ip-172-31-23-228" May 17 00:25:43.990740 kubelet[3186]: I0517 00:25:43.990617 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/57fbf7c520c86adf73736250868a624b-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-228\" (UID: \"57fbf7c520c86adf73736250868a624b\") " pod="kube-system/kube-controller-manager-ip-172-31-23-228" May 17 00:25:43.990740 kubelet[3186]: I0517 00:25:43.990644 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/57fbf7c520c86adf73736250868a624b-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-228\" (UID: \"57fbf7c520c86adf73736250868a624b\") " pod="kube-system/kube-controller-manager-ip-172-31-23-228" May 17 00:25:43.995043 kubelet[3186]: I0517 00:25:43.994776 3186 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-23-228" May 17 00:25:43.995043 kubelet[3186]: I0517 00:25:43.994837 3186 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-23-228" May 17 00:25:44.769638 kubelet[3186]: I0517 00:25:44.769599 3186 apiserver.go:52] "Watching apiserver" May 17 00:25:44.789671 kubelet[3186]: I0517 00:25:44.789621 3186 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:25:44.864278 kubelet[3186]: E0517 00:25:44.864243 3186 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-23-228\" already exists" pod="kube-system/kube-apiserver-ip-172-31-23-228" May 17 00:25:44.882623 kubelet[3186]: I0517 00:25:44.882316 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-23-228" podStartSLOduration=2.882300051 podStartE2EDuration="2.882300051s" podCreationTimestamp="2025-05-17 00:25:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:25:44.872069706 +0000 UTC m=+1.186392753" watchObservedRunningTime="2025-05-17 00:25:44.882300051 +0000 UTC m=+1.196623100" May 17 00:25:44.890717 kubelet[3186]: I0517 00:25:44.890669 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-23-228" podStartSLOduration=1.890655465 podStartE2EDuration="1.890655465s" podCreationTimestamp="2025-05-17 00:25:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:25:44.882527066 +0000 UTC m=+1.196850118" watchObservedRunningTime="2025-05-17 00:25:44.890655465 +0000 UTC m=+1.204978505" May 17 00:25:44.902465 
kubelet[3186]: I0517 00:25:44.902181 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-23-228" podStartSLOduration=1.902162583 podStartE2EDuration="1.902162583s" podCreationTimestamp="2025-05-17 00:25:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:25:44.891027712 +0000 UTC m=+1.205350742" watchObservedRunningTime="2025-05-17 00:25:44.902162583 +0000 UTC m=+1.216485642" May 17 00:25:47.871950 update_engine[1956]: I20250517 00:25:47.871873 1956 update_attempter.cc:509] Updating boot flags... May 17 00:25:47.927146 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3243) May 17 00:25:48.069509 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3247) May 17 00:25:48.643636 kubelet[3186]: I0517 00:25:48.643603 3186 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 00:25:48.644012 containerd[1974]: time="2025-05-17T00:25:48.643911006Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 17 00:25:48.644214 kubelet[3186]: I0517 00:25:48.644153 3186 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:25:49.462651 systemd[1]: Created slice kubepods-besteffort-podde86c366_0fd7_4452_a0a9_7c478b1f2d1f.slice - libcontainer container kubepods-besteffort-podde86c366_0fd7_4452_a0a9_7c478b1f2d1f.slice. May 17 00:25:49.527468 kubelet[3186]: I0517 00:25:49.527400 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de86c366-0fd7-4452-a0a9-7c478b1f2d1f-xtables-lock\") pod \"kube-proxy-jn2zq\" (UID: \"de86c366-0fd7-4452-a0a9-7c478b1f2d1f\") " pod="kube-system/kube-proxy-jn2zq" May 17 00:25:49.527839 kubelet[3186]: I0517 00:25:49.527476 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/de86c366-0fd7-4452-a0a9-7c478b1f2d1f-kube-proxy\") pod \"kube-proxy-jn2zq\" (UID: \"de86c366-0fd7-4452-a0a9-7c478b1f2d1f\") " pod="kube-system/kube-proxy-jn2zq" May 17 00:25:49.527839 kubelet[3186]: I0517 00:25:49.527506 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de86c366-0fd7-4452-a0a9-7c478b1f2d1f-lib-modules\") pod \"kube-proxy-jn2zq\" (UID: \"de86c366-0fd7-4452-a0a9-7c478b1f2d1f\") " pod="kube-system/kube-proxy-jn2zq" May 17 00:25:49.527839 kubelet[3186]: I0517 00:25:49.527544 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxtg9\" (UniqueName: \"kubernetes.io/projected/de86c366-0fd7-4452-a0a9-7c478b1f2d1f-kube-api-access-sxtg9\") pod \"kube-proxy-jn2zq\" (UID: \"de86c366-0fd7-4452-a0a9-7c478b1f2d1f\") " pod="kube-system/kube-proxy-jn2zq" May 17 00:25:49.620165 systemd[1]: Created slice kubepods-besteffort-pod902d38c4_38cf_488d_bd11_2abf675574c6.slice - libcontainer container kubepods-besteffort-pod902d38c4_38cf_488d_bd11_2abf675574c6.slice. 
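The "Updating runtime config through cri with podcidr" entry above is kubelet pushing the node's pod CIDR down to the container runtime over the CRI gRPC API; containerd acknowledges it in the next message ("No cni config template is specified..."). A minimal Go sketch of that same call follows, assuming containerd's default socket path and using the CIDR value logged above; it illustrates the CRI surface involved, not kubelet's actual wiring:

    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Dial the CRI socket (assumed default containerd path on this host).
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Push the pod CIDR to the runtime, as kubelet does when the node's
        // spec.podCIDR changes ("" -> "192.168.0.0/24" in the entry above).
        _, err = runtimeapi.NewRuntimeServiceClient(conn).UpdateRuntimeConfig(ctx,
            &runtimeapi.UpdateRuntimeConfigRequest{
                RuntimeConfig: &runtimeapi.RuntimeConfig{
                    NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
                },
            })
        if err != nil {
            log.Fatal(err)
        }
        log.Println("runtime config updated")
    }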
May 17 00:25:49.628520 kubelet[3186]: I0517 00:25:49.628479 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/902d38c4-38cf-488d-bd11-2abf675574c6-var-lib-calico\") pod \"tigera-operator-7c5755cdcb-jwnrk\" (UID: \"902d38c4-38cf-488d-bd11-2abf675574c6\") " pod="tigera-operator/tigera-operator-7c5755cdcb-jwnrk" May 17 00:25:49.628651 kubelet[3186]: I0517 00:25:49.628613 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lpjf\" (UniqueName: \"kubernetes.io/projected/902d38c4-38cf-488d-bd11-2abf675574c6-kube-api-access-4lpjf\") pod \"tigera-operator-7c5755cdcb-jwnrk\" (UID: \"902d38c4-38cf-488d-bd11-2abf675574c6\") " pod="tigera-operator/tigera-operator-7c5755cdcb-jwnrk" May 17 00:25:49.771154 containerd[1974]: time="2025-05-17T00:25:49.771043566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jn2zq,Uid:de86c366-0fd7-4452-a0a9-7c478b1f2d1f,Namespace:kube-system,Attempt:0,}" May 17 00:25:49.802884 containerd[1974]: time="2025-05-17T00:25:49.802792420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:25:49.802884 containerd[1974]: time="2025-05-17T00:25:49.802838982Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:25:49.802884 containerd[1974]: time="2025-05-17T00:25:49.802850177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:25:49.804565 containerd[1974]: time="2025-05-17T00:25:49.803130577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:25:49.831630 systemd[1]: Started cri-containerd-2bd7eafd1303205de9c5191d4b84ba3c1466a6fc0e4ca412896e768ad46b582a.scope - libcontainer container 2bd7eafd1303205de9c5191d4b84ba3c1466a6fc0e4ca412896e768ad46b582a. 
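The systemd "Created slice kubepods-besteffort-pod...slice" entries above come from kubelet's systemd cgroup driver: each pod gets a slice whose unit name encodes its parents (kubepods.slice -> kubepods-besteffort.slice -> ...-pod<uid>.slice), with dashes in the pod UID escaped to underscores. A small sketch reproducing the naming seen in this journal for BestEffort/Burstable pods (Guaranteed pods omit the QoS segment, which this helper does not handle):

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceNameFor reproduces the pod slice unit names visible above:
    // dashes in the UID become underscores, and parent slices are encoded
    // as dash-separated prefixes of the unit name.
    func sliceNameFor(qos, podUID string) string {
        escaped := strings.ReplaceAll(podUID, "-", "_")
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
    }

    func main() {
        // Prints kubepods-besteffort-pod902d38c4_38cf_488d_bd11_2abf675574c6.slice,
        // matching the tigera-operator slice created in the entry above.
        fmt.Println(sliceNameFor("besteffort", "902d38c4-38cf-488d-bd11-2abf675574c6"))
    }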
May 17 00:25:49.858485 containerd[1974]: time="2025-05-17T00:25:49.858346337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jn2zq,Uid:de86c366-0fd7-4452-a0a9-7c478b1f2d1f,Namespace:kube-system,Attempt:0,} returns sandbox id \"2bd7eafd1303205de9c5191d4b84ba3c1466a6fc0e4ca412896e768ad46b582a\"" May 17 00:25:49.861664 containerd[1974]: time="2025-05-17T00:25:49.861520875Z" level=info msg="CreateContainer within sandbox \"2bd7eafd1303205de9c5191d4b84ba3c1466a6fc0e4ca412896e768ad46b582a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 00:25:49.901327 containerd[1974]: time="2025-05-17T00:25:49.901280618Z" level=info msg="CreateContainer within sandbox \"2bd7eafd1303205de9c5191d4b84ba3c1466a6fc0e4ca412896e768ad46b582a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"def64f4c5bfd274db440f8dd332de01974054faf5b19efb983ca6cd04a709c67\"" May 17 00:25:49.903215 containerd[1974]: time="2025-05-17T00:25:49.902497564Z" level=info msg="StartContainer for \"def64f4c5bfd274db440f8dd332de01974054faf5b19efb983ca6cd04a709c67\"" May 17 00:25:49.924101 containerd[1974]: time="2025-05-17T00:25:49.924061370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-jwnrk,Uid:902d38c4-38cf-488d-bd11-2abf675574c6,Namespace:tigera-operator,Attempt:0,}" May 17 00:25:49.929616 systemd[1]: Started cri-containerd-def64f4c5bfd274db440f8dd332de01974054faf5b19efb983ca6cd04a709c67.scope - libcontainer container def64f4c5bfd274db440f8dd332de01974054faf5b19efb983ca6cd04a709c67. May 17 00:25:49.972535 containerd[1974]: time="2025-05-17T00:25:49.972402506Z" level=info msg="StartContainer for \"def64f4c5bfd274db440f8dd332de01974054faf5b19efb983ca6cd04a709c67\" returns successfully" May 17 00:25:49.972858 containerd[1974]: time="2025-05-17T00:25:49.972201388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:25:49.972858 containerd[1974]: time="2025-05-17T00:25:49.972286497Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:25:49.972858 containerd[1974]: time="2025-05-17T00:25:49.972306874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:25:49.972858 containerd[1974]: time="2025-05-17T00:25:49.972414600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:25:49.998617 systemd[1]: Started cri-containerd-91ff33bd9e4da250988c6d1a6719be933d357cffc738f971f10ffed086824703.scope - libcontainer container 91ff33bd9e4da250988c6d1a6719be933d357cffc738f971f10ffed086824703. 
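The containerd messages above trace the standard CRI pod lifecycle: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox and returns a container id, and StartContainer runs it. A compressed Go sketch of that three-call sequence, using the kube-proxy-jn2zq names and UID from the entries above (the image reference is hypothetical; the journal does not record which kube-proxy image was used):

    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rtc := runtimeapi.NewRuntimeServiceClient(conn)

        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
        defer cancel()

        // 1. RunPodSandbox, as in "RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jn2zq,...}".
        sandboxCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name:      "kube-proxy-jn2zq",
                Namespace: "kube-system",
                Uid:       "de86c366-0fd7-4452-a0a9-7c478b1f2d1f",
            },
        }
        sb, err := rtc.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
        if err != nil {
            log.Fatal(err)
        }

        // 2. CreateContainer within the returned sandbox (image is a placeholder).
        ctr, err := rtc.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
                Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.31.0"},
            },
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            log.Fatal(err)
        }

        // 3. StartContainer, matching the "StartContainer ... returns successfully" entries.
        if _, err := rtc.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
            log.Fatal(err)
        }
    }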
May 17 00:25:50.049417 containerd[1974]: time="2025-05-17T00:25:50.049376329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-jwnrk,Uid:902d38c4-38cf-488d-bd11-2abf675574c6,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"91ff33bd9e4da250988c6d1a6719be933d357cffc738f971f10ffed086824703\"" May 17 00:25:50.051648 containerd[1974]: time="2025-05-17T00:25:50.051610762Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 17 00:25:50.885530 kubelet[3186]: I0517 00:25:50.885044 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jn2zq" podStartSLOduration=1.8850258819999999 podStartE2EDuration="1.885025882s" podCreationTimestamp="2025-05-17 00:25:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:25:50.883721413 +0000 UTC m=+7.198044461" watchObservedRunningTime="2025-05-17 00:25:50.885025882 +0000 UTC m=+7.199348932" May 17 00:25:51.500057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1725381872.mount: Deactivated successfully. May 17 00:25:52.380183 containerd[1974]: time="2025-05-17T00:25:52.380131260Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:52.381282 containerd[1974]: time="2025-05-17T00:25:52.381217319Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451" May 17 00:25:52.382864 containerd[1974]: time="2025-05-17T00:25:52.382807233Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:52.385450 containerd[1974]: time="2025-05-17T00:25:52.385376558Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:25:52.386691 containerd[1974]: time="2025-05-17T00:25:52.386016927Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 2.334366864s" May 17 00:25:52.386691 containerd[1974]: time="2025-05-17T00:25:52.386133792Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\"" May 17 00:25:52.387818 containerd[1974]: time="2025-05-17T00:25:52.387789256Z" level=info msg="CreateContainer within sandbox \"91ff33bd9e4da250988c6d1a6719be933d357cffc738f971f10ffed086824703\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 17 00:25:52.407521 containerd[1974]: time="2025-05-17T00:25:52.407452140Z" level=info msg="CreateContainer within sandbox \"91ff33bd9e4da250988c6d1a6719be933d357cffc738f971f10ffed086824703\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"072019e2818e4b34c62db597f1a315384853518ab6ea96e9a5c4526eb808718b\"" May 17 00:25:52.408150 containerd[1974]: time="2025-05-17T00:25:52.408068901Z" level=info msg="StartContainer for 
\"072019e2818e4b34c62db597f1a315384853518ab6ea96e9a5c4526eb808718b\"" May 17 00:25:52.435829 systemd[1]: run-containerd-runc-k8s.io-072019e2818e4b34c62db597f1a315384853518ab6ea96e9a5c4526eb808718b-runc.PQowIA.mount: Deactivated successfully. May 17 00:25:52.445682 systemd[1]: Started cri-containerd-072019e2818e4b34c62db597f1a315384853518ab6ea96e9a5c4526eb808718b.scope - libcontainer container 072019e2818e4b34c62db597f1a315384853518ab6ea96e9a5c4526eb808718b. May 17 00:25:52.473799 containerd[1974]: time="2025-05-17T00:25:52.473760010Z" level=info msg="StartContainer for \"072019e2818e4b34c62db597f1a315384853518ab6ea96e9a5c4526eb808718b\" returns successfully" May 17 00:25:52.900933 kubelet[3186]: I0517 00:25:52.900865 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7c5755cdcb-jwnrk" podStartSLOduration=1.565063333 podStartE2EDuration="3.900848098s" podCreationTimestamp="2025-05-17 00:25:49 +0000 UTC" firstStartedPulling="2025-05-17 00:25:50.051069542 +0000 UTC m=+6.365392574" lastFinishedPulling="2025-05-17 00:25:52.386854307 +0000 UTC m=+8.701177339" observedRunningTime="2025-05-17 00:25:52.900829836 +0000 UTC m=+9.215152886" watchObservedRunningTime="2025-05-17 00:25:52.900848098 +0000 UTC m=+9.215171146" May 17 00:25:55.762672 systemd[1]: cri-containerd-072019e2818e4b34c62db597f1a315384853518ab6ea96e9a5c4526eb808718b.scope: Deactivated successfully. May 17 00:25:55.857226 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-072019e2818e4b34c62db597f1a315384853518ab6ea96e9a5c4526eb808718b-rootfs.mount: Deactivated successfully. May 17 00:25:56.096451 containerd[1974]: time="2025-05-17T00:25:56.017921254Z" level=info msg="shim disconnected" id=072019e2818e4b34c62db597f1a315384853518ab6ea96e9a5c4526eb808718b namespace=k8s.io May 17 00:25:56.096451 containerd[1974]: time="2025-05-17T00:25:56.095482132Z" level=warning msg="cleaning up after shim disconnected" id=072019e2818e4b34c62db597f1a315384853518ab6ea96e9a5c4526eb808718b namespace=k8s.io May 17 00:25:56.096451 containerd[1974]: time="2025-05-17T00:25:56.095503042Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:25:56.914110 kubelet[3186]: I0517 00:25:56.914071 3186 scope.go:117] "RemoveContainer" containerID="072019e2818e4b34c62db597f1a315384853518ab6ea96e9a5c4526eb808718b" May 17 00:25:56.948896 containerd[1974]: time="2025-05-17T00:25:56.948626197Z" level=info msg="CreateContainer within sandbox \"91ff33bd9e4da250988c6d1a6719be933d357cffc738f971f10ffed086824703\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" May 17 00:25:56.988384 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount759723508.mount: Deactivated successfully. May 17 00:25:56.995074 containerd[1974]: time="2025-05-17T00:25:56.995021500Z" level=info msg="CreateContainer within sandbox \"91ff33bd9e4da250988c6d1a6719be933d357cffc738f971f10ffed086824703\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"1573bc436eb3c26dd10949aacd67728e966ceecdbd0ebc7ff2889265a0b11972\"" May 17 00:25:57.000084 containerd[1974]: time="2025-05-17T00:25:56.999895585Z" level=info msg="StartContainer for \"1573bc436eb3c26dd10949aacd67728e966ceecdbd0ebc7ff2889265a0b11972\"" May 17 00:25:57.067691 systemd[1]: Started cri-containerd-1573bc436eb3c26dd10949aacd67728e966ceecdbd0ebc7ff2889265a0b11972.scope - libcontainer container 1573bc436eb3c26dd10949aacd67728e966ceecdbd0ebc7ff2889265a0b11972. 
May 17 00:25:57.098154 containerd[1974]: time="2025-05-17T00:25:57.097986878Z" level=info msg="StartContainer for \"1573bc436eb3c26dd10949aacd67728e966ceecdbd0ebc7ff2889265a0b11972\" returns successfully" May 17 00:25:59.128277 sudo[2331]: pam_unix(sudo:session): session closed for user root May 17 00:25:59.151247 sshd[2317]: pam_unix(sshd:session): session closed for user core May 17 00:25:59.153920 systemd[1]: sshd@8-172.31.23.228:22-147.75.109.163:41008.service: Deactivated successfully. May 17 00:25:59.156066 systemd[1]: session-9.scope: Deactivated successfully. May 17 00:25:59.156224 systemd[1]: session-9.scope: Consumed 5.329s CPU time, 142.0M memory peak, 0B memory swap peak. May 17 00:25:59.157439 systemd-logind[1955]: Session 9 logged out. Waiting for processes to exit. May 17 00:25:59.159226 systemd-logind[1955]: Removed session 9. May 17 00:26:05.559926 systemd[1]: Created slice kubepods-besteffort-pod70edea70_0ff9_4a06_9e30_1fffa38ae98b.slice - libcontainer container kubepods-besteffort-pod70edea70_0ff9_4a06_9e30_1fffa38ae98b.slice. May 17 00:26:05.650295 kubelet[3186]: I0517 00:26:05.650250 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztzmt\" (UniqueName: \"kubernetes.io/projected/70edea70-0ff9-4a06-9e30-1fffa38ae98b-kube-api-access-ztzmt\") pod \"calico-typha-6dff8975f-d687q\" (UID: \"70edea70-0ff9-4a06-9e30-1fffa38ae98b\") " pod="calico-system/calico-typha-6dff8975f-d687q" May 17 00:26:05.651013 kubelet[3186]: I0517 00:26:05.650810 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70edea70-0ff9-4a06-9e30-1fffa38ae98b-tigera-ca-bundle\") pod \"calico-typha-6dff8975f-d687q\" (UID: \"70edea70-0ff9-4a06-9e30-1fffa38ae98b\") " pod="calico-system/calico-typha-6dff8975f-d687q" May 17 00:26:05.651013 kubelet[3186]: I0517 00:26:05.650844 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/70edea70-0ff9-4a06-9e30-1fffa38ae98b-typha-certs\") pod \"calico-typha-6dff8975f-d687q\" (UID: \"70edea70-0ff9-4a06-9e30-1fffa38ae98b\") " pod="calico-system/calico-typha-6dff8975f-d687q" May 17 00:26:05.879709 containerd[1974]: time="2025-05-17T00:26:05.878898068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6dff8975f-d687q,Uid:70edea70-0ff9-4a06-9e30-1fffa38ae98b,Namespace:calico-system,Attempt:0,}" May 17 00:26:05.932699 containerd[1974]: time="2025-05-17T00:26:05.930838793Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:26:05.931822 systemd[1]: Created slice kubepods-besteffort-pod7b2c6329_f8ca_41a2_995d_f839cda13edc.slice - libcontainer container kubepods-besteffort-pod7b2c6329_f8ca_41a2_995d_f839cda13edc.slice. May 17 00:26:05.932896 containerd[1974]: time="2025-05-17T00:26:05.932490779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:26:05.932896 containerd[1974]: time="2025-05-17T00:26:05.932553928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:26:05.932896 containerd[1974]: time="2025-05-17T00:26:05.932646576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:26:05.952955 kubelet[3186]: I0517 00:26:05.952929 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b2c6329-f8ca-41a2-995d-f839cda13edc-lib-modules\") pod \"calico-node-t2x7c\" (UID: \"7b2c6329-f8ca-41a2-995d-f839cda13edc\") " pod="calico-system/calico-node-t2x7c" May 17 00:26:05.953538 kubelet[3186]: I0517 00:26:05.953463 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b2c6329-f8ca-41a2-995d-f839cda13edc-tigera-ca-bundle\") pod \"calico-node-t2x7c\" (UID: \"7b2c6329-f8ca-41a2-995d-f839cda13edc\") " pod="calico-system/calico-node-t2x7c" May 17 00:26:05.953538 kubelet[3186]: I0517 00:26:05.953501 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7b2c6329-f8ca-41a2-995d-f839cda13edc-var-run-calico\") pod \"calico-node-t2x7c\" (UID: \"7b2c6329-f8ca-41a2-995d-f839cda13edc\") " pod="calico-system/calico-node-t2x7c" May 17 00:26:05.953538 kubelet[3186]: I0517 00:26:05.953519 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7b2c6329-f8ca-41a2-995d-f839cda13edc-cni-net-dir\") pod \"calico-node-t2x7c\" (UID: \"7b2c6329-f8ca-41a2-995d-f839cda13edc\") " pod="calico-system/calico-node-t2x7c" May 17 00:26:05.953538 kubelet[3186]: I0517 00:26:05.953538 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7b2c6329-f8ca-41a2-995d-f839cda13edc-cni-log-dir\") pod \"calico-node-t2x7c\" (UID: \"7b2c6329-f8ca-41a2-995d-f839cda13edc\") " pod="calico-system/calico-node-t2x7c" May 17 00:26:05.953538 kubelet[3186]: I0517 00:26:05.953557 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7b2c6329-f8ca-41a2-995d-f839cda13edc-flexvol-driver-host\") pod \"calico-node-t2x7c\" (UID: \"7b2c6329-f8ca-41a2-995d-f839cda13edc\") " pod="calico-system/calico-node-t2x7c" May 17 00:26:05.953755 kubelet[3186]: I0517 00:26:05.953574 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7b2c6329-f8ca-41a2-995d-f839cda13edc-node-certs\") pod \"calico-node-t2x7c\" (UID: \"7b2c6329-f8ca-41a2-995d-f839cda13edc\") " pod="calico-system/calico-node-t2x7c" May 17 00:26:05.953755 kubelet[3186]: I0517 00:26:05.953592 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7b2c6329-f8ca-41a2-995d-f839cda13edc-policysync\") pod \"calico-node-t2x7c\" (UID: \"7b2c6329-f8ca-41a2-995d-f839cda13edc\") " pod="calico-system/calico-node-t2x7c" May 17 00:26:05.953755 kubelet[3186]: I0517 00:26:05.953605 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7b2c6329-f8ca-41a2-995d-f839cda13edc-xtables-lock\") pod \"calico-node-t2x7c\" (UID: \"7b2c6329-f8ca-41a2-995d-f839cda13edc\") " pod="calico-system/calico-node-t2x7c" May 17 00:26:05.953755 kubelet[3186]: 
I0517 00:26:05.953622 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7b2c6329-f8ca-41a2-995d-f839cda13edc-var-lib-calico\") pod \"calico-node-t2x7c\" (UID: \"7b2c6329-f8ca-41a2-995d-f839cda13edc\") " pod="calico-system/calico-node-t2x7c" May 17 00:26:05.953755 kubelet[3186]: I0517 00:26:05.953638 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c6wb\" (UniqueName: \"kubernetes.io/projected/7b2c6329-f8ca-41a2-995d-f839cda13edc-kube-api-access-7c6wb\") pod \"calico-node-t2x7c\" (UID: \"7b2c6329-f8ca-41a2-995d-f839cda13edc\") " pod="calico-system/calico-node-t2x7c" May 17 00:26:05.953888 kubelet[3186]: I0517 00:26:05.953654 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7b2c6329-f8ca-41a2-995d-f839cda13edc-cni-bin-dir\") pod \"calico-node-t2x7c\" (UID: \"7b2c6329-f8ca-41a2-995d-f839cda13edc\") " pod="calico-system/calico-node-t2x7c" May 17 00:26:05.969685 systemd[1]: Started cri-containerd-bdde645d4dc8bece8dc870bfe15fa2d4943bf7f67e0e860f6ea33f060d536367.scope - libcontainer container bdde645d4dc8bece8dc870bfe15fa2d4943bf7f67e0e860f6ea33f060d536367. May 17 00:26:06.067476 kubelet[3186]: E0517 00:26:06.067304 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:26:06.067476 kubelet[3186]: W0517 00:26:06.067332 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:26:06.067476 kubelet[3186]: E0517 00:26:06.067384 3186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:26:06.074748 containerd[1974]: time="2025-05-17T00:26:06.074707257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6dff8975f-d687q,Uid:70edea70-0ff9-4a06-9e30-1fffa38ae98b,Namespace:calico-system,Attempt:0,} returns sandbox id \"bdde645d4dc8bece8dc870bfe15fa2d4943bf7f67e0e860f6ea33f060d536367\"" May 17 00:26:06.076316 kubelet[3186]: E0517 00:26:06.076289 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:26:06.076316 kubelet[3186]: W0517 00:26:06.076309 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:26:06.076537 kubelet[3186]: E0517 00:26:06.076331 3186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:26:06.083250 containerd[1974]: time="2025-05-17T00:26:06.083222104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 17 00:26:06.167763 kubelet[3186]: E0517 00:26:06.167447 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4gndq" podUID="d1caf04b-d279-4556-9507-efceb97ef03e" May 17 00:26:06.225394 kubelet[3186]: E0517 00:26:06.225348 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:26:06.225394 kubelet[3186]: W0517 00:26:06.225372 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:26:06.225394 kubelet[3186]: E0517 00:26:06.225396 3186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:26:06.225799 kubelet[3186]: E0517 00:26:06.225782 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:26:06.225799 kubelet[3186]: W0517 00:26:06.225794 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:26:06.226106 kubelet[3186]: E0517 00:26:06.225804 3186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:26:06.226206 kubelet[3186]: E0517 00:26:06.226184 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:26:06.226238 kubelet[3186]: W0517 00:26:06.226219 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:26:06.226266 kubelet[3186]: E0517 00:26:06.226236 3186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:26:06.226505 kubelet[3186]: E0517 00:26:06.226491 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:26:06.226505 kubelet[3186]: W0517 00:26:06.226503 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:26:06.226577 kubelet[3186]: E0517 00:26:06.226524 3186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:26:06.226736 kubelet[3186]: E0517 00:26:06.226723 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:26:06.226736 kubelet[3186]: W0517 00:26:06.226733 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:26:06.226794 kubelet[3186]: E0517 00:26:06.226742 3186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:26:06.226938 kubelet[3186]: E0517 00:26:06.226924 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:26:06.226938 kubelet[3186]: W0517 00:26:06.226935 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:26:06.226998 kubelet[3186]: E0517 00:26:06.226942 3186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:26:06.227197 kubelet[3186]: E0517 00:26:06.227173 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:26:06.227236 kubelet[3186]: W0517 00:26:06.227195 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:26:06.227236 kubelet[3186]: E0517 00:26:06.227212 3186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:26:06.227483 kubelet[3186]: E0517 00:26:06.227467 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:26:06.227483 kubelet[3186]: W0517 00:26:06.227479 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:26:06.227561 kubelet[3186]: E0517 00:26:06.227489 3186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:26:06.228571 kubelet[3186]: E0517 00:26:06.228483 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:26:06.228571 kubelet[3186]: W0517 00:26:06.228512 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:26:06.228571 kubelet[3186]: E0517 00:26:06.228524 3186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:26:06.228945 kubelet[3186]: E0517 00:26:06.228907 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:26:06.228945 kubelet[3186]: W0517 00:26:06.228919 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:26:06.228945 kubelet[3186]: E0517 00:26:06.228929 3186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:26:06.229553 kubelet[3186]: E0517 00:26:06.229532 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:26:06.229553 kubelet[3186]: W0517 00:26:06.229544 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:26:06.229674 kubelet[3186]: E0517 00:26:06.229567 3186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:26:06.230056 kubelet[3186]: E0517 00:26:06.230037 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:26:06.230056 kubelet[3186]: W0517 00:26:06.230050 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:26:06.230174 kubelet[3186]: E0517 00:26:06.230062 3186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:26:06.231060 kubelet[3186]: E0517 00:26:06.230948 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:26:06.231060 kubelet[3186]: W0517 00:26:06.230962 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:26:06.231060 kubelet[3186]: E0517 00:26:06.230973 3186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:26:06.231320 kubelet[3186]: E0517 00:26:06.231236 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:26:06.231320 kubelet[3186]: W0517 00:26:06.231247 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:26:06.231320 kubelet[3186]: E0517 00:26:06.231257 3186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:26:06.231520 kubelet[3186]: E0517 00:26:06.231503 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:26:06.231520 kubelet[3186]: W0517 00:26:06.231513 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:26:06.231520 kubelet[3186]: E0517 00:26:06.231523 3186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:26:06.231851 kubelet[3186]: E0517 00:26:06.231813 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:26:06.231851 kubelet[3186]: W0517 00:26:06.231826 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:26:06.231851 kubelet[3186]: E0517 00:26:06.231835 3186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:26:06.232517 kubelet[3186]: E0517 00:26:06.232500 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:26:06.232517 kubelet[3186]: W0517 00:26:06.232514 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:26:06.232691 kubelet[3186]: E0517 00:26:06.232525 3186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:26:06.232852 kubelet[3186]: E0517 00:26:06.232839 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:26:06.233483 kubelet[3186]: W0517 00:26:06.232860 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:26:06.233483 kubelet[3186]: E0517 00:26:06.232870 3186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:26:06.233632 kubelet[3186]: E0517 00:26:06.233617 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:26:06.233632 kubelet[3186]: W0517 00:26:06.233632 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:26:06.233693 kubelet[3186]: E0517 00:26:06.233643 3186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:26:06.234094 kubelet[3186]: E0517 00:26:06.234075 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:26:06.234094 kubelet[3186]: W0517 00:26:06.234094 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:26:06.234182 kubelet[3186]: E0517 00:26:06.234104 3186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:26:06.246738 containerd[1974]: time="2025-05-17T00:26:06.246678043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t2x7c,Uid:7b2c6329-f8ca-41a2-995d-f839cda13edc,Namespace:calico-system,Attempt:0,}" May 17 00:26:06.257179 kubelet[3186]: E0517 00:26:06.256705 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:26:06.257179 kubelet[3186]: W0517 00:26:06.256728 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:26:06.257179 kubelet[3186]: E0517 00:26:06.256749 3186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:26:06.257179 kubelet[3186]: I0517 00:26:06.256778 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d1caf04b-d279-4556-9507-efceb97ef03e-kubelet-dir\") pod \"csi-node-driver-4gndq\" (UID: \"d1caf04b-d279-4556-9507-efceb97ef03e\") " pod="calico-system/csi-node-driver-4gndq" May 17 00:26:06.257884 kubelet[3186]: E0517 00:26:06.257662 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:26:06.257884 kubelet[3186]: W0517 00:26:06.257678 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:26:06.257884 kubelet[3186]: E0517 00:26:06.257706 3186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:26:06.257884 kubelet[3186]: I0517 00:26:06.257728 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d1caf04b-d279-4556-9507-efceb97ef03e-registration-dir\") pod \"csi-node-driver-4gndq\" (UID: \"d1caf04b-d279-4556-9507-efceb97ef03e\") " pod="calico-system/csi-node-driver-4gndq" May 17 00:26:06.258866 kubelet[3186]: E0517 00:26:06.258450 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:26:06.258866 kubelet[3186]: W0517 00:26:06.258463 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:26:06.258866 kubelet[3186]: E0517 00:26:06.258633 3186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:26:06.259136 kubelet[3186]: E0517 00:26:06.259019 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:26:06.259136 kubelet[3186]: W0517 00:26:06.259050 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:26:06.259136 kubelet[3186]: E0517 00:26:06.259082 3186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:26:06.259811 kubelet[3186]: E0517 00:26:06.259725 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:26:06.259811 kubelet[3186]: W0517 00:26:06.259737 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:26:06.259811 kubelet[3186]: E0517 00:26:06.259760 3186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:26:06.259811 kubelet[3186]: I0517 00:26:06.259780 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b22fn\" (UniqueName: \"kubernetes.io/projected/d1caf04b-d279-4556-9507-efceb97ef03e-kube-api-access-b22fn\") pod \"csi-node-driver-4gndq\" (UID: \"d1caf04b-d279-4556-9507-efceb97ef03e\") " pod="calico-system/csi-node-driver-4gndq" May 17 00:26:06.260273 kubelet[3186]: E0517 00:26:06.260162 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:26:06.260273 kubelet[3186]: W0517 00:26:06.260174 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:26:06.260273 kubelet[3186]: E0517 00:26:06.260193 3186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:26:06.260609 kubelet[3186]: E0517 00:26:06.260519 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:26:06.260609 kubelet[3186]: W0517 00:26:06.260529 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:26:06.260609 kubelet[3186]: E0517 00:26:06.260541 3186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:26:06.261124 kubelet[3186]: E0517 00:26:06.260984 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:26:06.261206 kubelet[3186]: W0517 00:26:06.261181 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:26:06.261378 kubelet[3186]: E0517 00:26:06.261332 3186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:26:06.261378 kubelet[3186]: I0517 00:26:06.261354 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d1caf04b-d279-4556-9507-efceb97ef03e-socket-dir\") pod \"csi-node-driver-4gndq\" (UID: \"d1caf04b-d279-4556-9507-efceb97ef03e\") " pod="calico-system/csi-node-driver-4gndq" May 17 00:26:06.261686 kubelet[3186]: E0517 00:26:06.261581 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:26:06.261686 kubelet[3186]: W0517 00:26:06.261592 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:26:06.261686 kubelet[3186]: E0517 00:26:06.261604 3186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:26:06.262579 kubelet[3186]: E0517 00:26:06.262126 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:26:06.262579 kubelet[3186]: W0517 00:26:06.262136 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:26:06.262579 kubelet[3186]: E0517 00:26:06.262165 3186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 17 00:26:06.262579 kubelet[3186]: E0517 00:26:06.262369 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:26:06.262579 kubelet[3186]: W0517 00:26:06.262382 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:26:06.262579 kubelet[3186]: E0517 00:26:06.262397 3186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:26:06.262987 kubelet[3186]: I0517 00:26:06.262415 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d1caf04b-d279-4556-9507-efceb97ef03e-varrun\") pod \"csi-node-driver-4gndq\" (UID: \"d1caf04b-d279-4556-9507-efceb97ef03e\") " pod="calico-system/csi-node-driver-4gndq"
[last 3 kubelet messages repeated 4 times, only timestamps changing, between 00:26:06.263 and 00:26:06.264]
May 17 00:26:06.313833 containerd[1974]: time="2025-05-17T00:26:06.311830857Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:26:06.314114 containerd[1974]: time="2025-05-17T00:26:06.314034090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:26:06.314114 containerd[1974]: time="2025-05-17T00:26:06.314091117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:26:06.314408 containerd[1974]: time="2025-05-17T00:26:06.314356314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:26:06.360012 systemd[1]: Started cri-containerd-cdf621924179ade85b5f7748f09b82c10817339a73bd22db337d104f8f78a04a.scope - libcontainer container cdf621924179ade85b5f7748f09b82c10817339a73bd22db337d104f8f78a04a.
[the FlexVolume probe-failure triplet above repeated 26 times, only timestamps changing, between 00:26:06.364 and 00:26:06.392]
May 17 00:26:06.427339 containerd[1974]: time="2025-05-17T00:26:06.427121481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t2x7c,Uid:7b2c6329-f8ca-41a2-995d-f839cda13edc,Namespace:calico-system,Attempt:0,} returns sandbox id \"cdf621924179ade85b5f7748f09b82c10817339a73bd22db337d104f8f78a04a\""
May 17 00:26:07.504571 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4170347207.mount: Deactivated successfully.
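The repeated probe failure above has a single mechanism: kubelet execs each FlexVolume driver binary with the argument "init" and JSON-decodes whatever the driver writes to stdout. The uds binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/ does not exist yet (Calico's flexvol-driver init container, which runs later in this log, is what installs it), so the exec fails, stdout stays empty, and decoding an empty buffer is exactly Go's "unexpected end of JSON input". Below is a minimal Go sketch of that call pattern; it illustrates the mechanism, not kubelet's actual driver-call.go code, and the trimmed DriverStatus struct is an assumption based on the documented FlexVolume response shape.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus is a trimmed stand-in for the FlexVolume JSON response
// convention ({"status":"Success",...}).
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func callDriver(driver string, args ...string) (*DriverStatus, error) {
	// Resolving a bare name via $PATH reproduces the logged exec error.
	out, err := exec.Command(driver, args...).Output()
	if err != nil {
		// Missing binary: "executable file not found in $PATH", out == "".
		fmt.Printf("FlexVolume: driver call failed: %v, output: %q\n", err, out)
	}
	var st DriverStatus
	if uerr := json.Unmarshal(out, &st); uerr != nil {
		// Empty input yields precisely "unexpected end of JSON input".
		return nil, fmt.Errorf("failed to unmarshal output: %w", uerr)
	}
	return &st, nil
}

func main() {
	_, err := callDriver("uds", "init")
	fmt.Println(err)
}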
May 17 00:26:07.806636 kubelet[3186]: E0517 00:26:07.806532 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4gndq" podUID="d1caf04b-d279-4556-9507-efceb97ef03e"
May 17 00:26:08.468341 containerd[1974]: time="2025-05-17T00:26:08.468289731Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:26:08.471134 containerd[1974]: time="2025-05-17T00:26:08.471072930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=35158669"
May 17 00:26:08.475251 containerd[1974]: time="2025-05-17T00:26:08.474086159Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:26:08.482659 containerd[1974]: time="2025-05-17T00:26:08.481739732Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:26:08.482659 containerd[1974]: time="2025-05-17T00:26:08.482505202Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 2.399035445s"
May 17 00:26:08.482659 containerd[1974]: time="2025-05-17T00:26:08.482543214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\""
May 17 00:26:08.483979 containerd[1974]: time="2025-05-17T00:26:08.483933611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\""
May 17 00:26:08.502559 containerd[1974]: time="2025-05-17T00:26:08.502514491Z" level=info msg="CreateContainer within sandbox \"bdde645d4dc8bece8dc870bfe15fa2d4943bf7f67e0e860f6ea33f060d536367\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
May 17 00:26:08.546483 containerd[1974]: time="2025-05-17T00:26:08.546409466Z" level=info msg="CreateContainer within sandbox \"bdde645d4dc8bece8dc870bfe15fa2d4943bf7f67e0e860f6ea33f060d536367\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"14ca80c735cfeba8df447f8c99a67968994eb448d4ab07e88c5ef7455064ccd3\""
May 17 00:26:08.547288 containerd[1974]: time="2025-05-17T00:26:08.547259089Z" level=info msg="StartContainer for \"14ca80c735cfeba8df447f8c99a67968994eb448d4ab07e88c5ef7455064ccd3\""
May 17 00:26:08.609885 systemd[1]: Started cri-containerd-14ca80c735cfeba8df447f8c99a67968994eb448d4ab07e88c5ef7455064ccd3.scope - libcontainer container 14ca80c735cfeba8df447f8c99a67968994eb448d4ab07e88c5ef7455064ccd3.
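When digging through the containerd entries interleaved here, a throwaway field extractor is handy. The sketch below assumes only the time=/level=/msg= layout visible in these lines (msg values keep their backslash escapes; nothing containerd-specific is used):

package main

import (
	"fmt"
	"regexp"
)

// Matches the time="..." level=... msg="..." layout of the entries above;
// the msg group tolerates escaped quotes and is left unescaped.
var entry = regexp.MustCompile(`time="([^"]+)" level=(\w+) msg="((?:[^"\\]|\\.)*)"`)

func main() {
	line := `time="2025-05-17T00:26:08.482543214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\""`
	if m := entry.FindStringSubmatch(line); m != nil {
		fmt.Printf("time=%s\nlevel=%s\nmsg=%s\n", m[1], m[2], m[3])
	}
}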
May 17 00:26:08.659059 containerd[1974]: time="2025-05-17T00:26:08.658721492Z" level=info msg="StartContainer for \"14ca80c735cfeba8df447f8c99a67968994eb448d4ab07e88c5ef7455064ccd3\" returns successfully"
May 17 00:26:08.965180 kubelet[3186]: E0517 00:26:08.964944 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:26:08.965180 kubelet[3186]: W0517 00:26:08.964974 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:26:08.965180 kubelet[3186]: E0517 00:26:08.965003 3186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[last 3 kubelet messages repeated 32 times, only timestamps changing, between 00:26:08.966 and 00:26:09.011]
May 17 00:26:09.032850 kubelet[3186]: I0517 00:26:09.032770 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6dff8975f-d687q" podStartSLOduration=1.631550542 podStartE2EDuration="4.032728142s" podCreationTimestamp="2025-05-17 00:26:05 +0000 UTC" firstStartedPulling="2025-05-17 00:26:06.082616773 +0000 UTC m=+22.396939802" lastFinishedPulling="2025-05-17 00:26:08.483794351 +0000 UTC m=+24.798117402" observedRunningTime="2025-05-17 00:26:09.032237612 +0000 UTC m=+25.346560662" watchObservedRunningTime="2025-05-17 00:26:09.032728142 +0000 UTC m=+25.347051186"
May 17 00:26:09.687637 containerd[1974]: time="2025-05-17T00:26:09.687587953Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:26:09.688763 containerd[1974]: time="2025-05-17T00:26:09.688617535Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4441619"
May 17 00:26:09.690591 containerd[1974]: time="2025-05-17T00:26:09.690534939Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:26:09.694583 containerd[1974]: time="2025-05-17T00:26:09.693730367Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:26:09.694739 containerd[1974]: time="2025-05-17T00:26:09.694712873Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 1.210355939s"
May 17 00:26:09.694855 containerd[1974]: time="2025-05-17T00:26:09.694798853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\""
May 17 00:26:09.697839 containerd[1974]: time="2025-05-17T00:26:09.697787646Z" level=info msg="CreateContainer within sandbox \"cdf621924179ade85b5f7748f09b82c10817339a73bd22db337d104f8f78a04a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
May 17 00:26:09.722166 containerd[1974]: time="2025-05-17T00:26:09.722102883Z" level=info msg="CreateContainer within sandbox \"cdf621924179ade85b5f7748f09b82c10817339a73bd22db337d104f8f78a04a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0a14d922bb1af7972ae2011cfec6d478942a3c84ee70c69d484445924bede8c6\""
May 17 00:26:09.723624 containerd[1974]: time="2025-05-17T00:26:09.723576216Z" level=info msg="StartContainer for \"0a14d922bb1af7972ae2011cfec6d478942a3c84ee70c69d484445924bede8c6\""
May 17 00:26:09.765638 systemd[1]: Started cri-containerd-0a14d922bb1af7972ae2011cfec6d478942a3c84ee70c69d484445924bede8c6.scope - libcontainer container 0a14d922bb1af7972ae2011cfec6d478942a3c84ee70c69d484445924bede8c6.
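The "Observed pod startup duration" record above is internally consistent: observedRunningTime minus podCreationTimestamp gives the 4.032728142s podStartE2EDuration, and subtracting the image-pull window (lastFinishedPulling minus firstStartedPulling) lands within a few tens of nanoseconds of the 1.631550542 podStartSLOduration; kubelet's tracker works from the monotonic m=+ readings, which accounts for the last-digit drift. A quick Go check using the wall-clock stamps as logged:

package main

import (
	"fmt"
	"time"
)

// Layout matching kubelet's logged timestamps, e.g.
// "2025-05-17 00:26:09.032728142 +0000 UTC".
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-05-17 00:26:05 +0000 UTC")
	running := mustParse("2025-05-17 00:26:09.032728142 +0000 UTC")
	pullStart := mustParse("2025-05-17 00:26:06.082616773 +0000 UTC")
	pullEnd := mustParse("2025-05-17 00:26:08.483794351 +0000 UTC")

	e2e := running.Sub(created)         // 4.032728142s, the E2E figure
	slo := e2e - pullEnd.Sub(pullStart) // ~1.63155s, pull time excluded
	fmt.Println(e2e, slo)
}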
May 17 00:26:09.796790 kubelet[3186]: E0517 00:26:09.796730 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4gndq" podUID="d1caf04b-d279-4556-9507-efceb97ef03e"
May 17 00:26:09.817643 containerd[1974]: time="2025-05-17T00:26:09.817578803Z" level=info msg="StartContainer for \"0a14d922bb1af7972ae2011cfec6d478942a3c84ee70c69d484445924bede8c6\" returns successfully"
May 17 00:26:09.832508 systemd[1]: cri-containerd-0a14d922bb1af7972ae2011cfec6d478942a3c84ee70c69d484445924bede8c6.scope: Deactivated successfully.
May 17 00:26:09.867645 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a14d922bb1af7972ae2011cfec6d478942a3c84ee70c69d484445924bede8c6-rootfs.mount: Deactivated successfully.
May 17 00:26:09.959776 kubelet[3186]: I0517 00:26:09.959238 3186 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 17 00:26:10.021381 containerd[1974]: time="2025-05-17T00:26:10.021261633Z" level=info msg="shim disconnected" id=0a14d922bb1af7972ae2011cfec6d478942a3c84ee70c69d484445924bede8c6 namespace=k8s.io
May 17 00:26:10.021656 containerd[1974]: time="2025-05-17T00:26:10.021392763Z" level=warning msg="cleaning up after shim disconnected" id=0a14d922bb1af7972ae2011cfec6d478942a3c84ee70c69d484445924bede8c6 namespace=k8s.io
May 17 00:26:10.021656 containerd[1974]: time="2025-05-17T00:26:10.021406951Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:26:10.964277 containerd[1974]: time="2025-05-17T00:26:10.964100434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\""
May 17 00:26:11.794830 kubelet[3186]: E0517 00:26:11.794534 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4gndq" podUID="d1caf04b-d279-4556-9507-efceb97ef03e"
May 17 00:26:13.795894 kubelet[3186]: E0517 00:26:13.795834 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4gndq" podUID="d1caf04b-d279-4556-9507-efceb97ef03e"
May 17 00:26:14.262824 containerd[1974]: time="2025-05-17T00:26:14.262643876Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:26:14.264125 containerd[1974]: time="2025-05-17T00:26:14.263938962Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568"
May 17 00:26:14.266707 containerd[1974]: time="2025-05-17T00:26:14.266343291Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:26:14.268899 containerd[1974]: time="2025-05-17T00:26:14.268852403Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:26:14.269734 containerd[1974]: time="2025-05-17T00:26:14.269487409Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 3.305354981s"
May 17 00:26:14.269734 containerd[1974]: time="2025-05-17T00:26:14.269516200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\""
May 17 00:26:14.271877 containerd[1974]: time="2025-05-17T00:26:14.271852425Z" level=info msg="CreateContainer within sandbox \"cdf621924179ade85b5f7748f09b82c10817339a73bd22db337d104f8f78a04a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
May 17 00:26:14.293126 containerd[1974]: time="2025-05-17T00:26:14.292982563Z" level=info msg="CreateContainer within sandbox \"cdf621924179ade85b5f7748f09b82c10817339a73bd22db337d104f8f78a04a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fcd8a751ac85e49f01846e5319c4de249b61c927ca6635a8ba15413076e36a93\""
May 17 00:26:14.293792 containerd[1974]: time="2025-05-17T00:26:14.293583078Z" level=info msg="StartContainer for \"fcd8a751ac85e49f01846e5319c4de249b61c927ca6635a8ba15413076e36a93\""
May 17 00:26:14.328602 systemd[1]: Started cri-containerd-fcd8a751ac85e49f01846e5319c4de249b61c927ca6635a8ba15413076e36a93.scope - libcontainer container fcd8a751ac85e49f01846e5319c4de249b61c927ca6635a8ba15413076e36a93.
May 17 00:26:14.371464 containerd[1974]: time="2025-05-17T00:26:14.371318534Z" level=info msg="StartContainer for \"fcd8a751ac85e49f01846e5319c4de249b61c927ca6635a8ba15413076e36a93\" returns successfully"
May 17 00:26:15.062905 systemd[1]: cri-containerd-fcd8a751ac85e49f01846e5319c4de249b61c927ca6635a8ba15413076e36a93.scope: Deactivated successfully.
May 17 00:26:15.107157 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fcd8a751ac85e49f01846e5319c4de249b61c927ca6635a8ba15413076e36a93-rootfs.mount: Deactivated successfully.
May 17 00:26:15.130371 kubelet[3186]: I0517 00:26:15.129587 3186 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
May 17 00:26:15.306130 containerd[1974]: time="2025-05-17T00:26:15.306065946Z" level=info msg="shim disconnected" id=fcd8a751ac85e49f01846e5319c4de249b61c927ca6635a8ba15413076e36a93 namespace=k8s.io
May 17 00:26:15.306130 containerd[1974]: time="2025-05-17T00:26:15.306128373Z" level=warning msg="cleaning up after shim disconnected" id=fcd8a751ac85e49f01846e5319c4de249b61c927ca6635a8ba15413076e36a93 namespace=k8s.io
May 17 00:26:15.306130 containerd[1974]: time="2025-05-17T00:26:15.306137191Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:26:15.350488 systemd[1]: Created slice kubepods-besteffort-pod8a71913d_89eb_4c9e_9dfe_b7eb7c1fd2b5.slice - libcontainer container kubepods-besteffort-pod8a71913d_89eb_4c9e_9dfe_b7eb7c1fd2b5.slice.
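The kubepods-*.slice unit names systemd keeps creating here are derived mechanically from each pod's QoS class and UID, with the UID's dashes swapped for underscores; the burstable coredns pods versus the besteffort Calico pods below show both variants. A sketch of that mapping, assuming the standard kubelet systemd cgroup-driver convention that these names match:

package main

import (
	"fmt"
	"strings"
)

// podSlice builds the leaf slice name seen in the systemd entries.
func podSlice(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// Matches "Created slice kubepods-besteffort-pod8a71913d_89eb_4c9e_9dfe_b7eb7c1fd2b5.slice"
	fmt.Println(podSlice("besteffort", "8a71913d-89eb-4c9e-9dfe-b7eb7c1fd2b5"))
	// Matches "Created slice kubepods-burstable-pod9832b571_3650_40cf_9d5c_47e3967ad978.slice"
	fmt.Println(podSlice("burstable", "9832b571-3650-40cf-9d5c-47e3967ad978"))
}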
May 17 00:26:15.354557 kubelet[3186]: I0517 00:26:15.352773 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a71913d-89eb-4c9e-9dfe-b7eb7c1fd2b5-tigera-ca-bundle\") pod \"calico-kube-controllers-6b48558975-rlpts\" (UID: \"8a71913d-89eb-4c9e-9dfe-b7eb7c1fd2b5\") " pod="calico-system/calico-kube-controllers-6b48558975-rlpts"
May 17 00:26:15.354557 kubelet[3186]: I0517 00:26:15.352809 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/633b52bc-acea-4d38-819a-9ad0c0dcf6e5-whisker-backend-key-pair\") pod \"whisker-79c6d7d9f5-8hp27\" (UID: \"633b52bc-acea-4d38-819a-9ad0c0dcf6e5\") " pod="calico-system/whisker-79c6d7d9f5-8hp27"
May 17 00:26:15.354557 kubelet[3186]: I0517 00:26:15.352830 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plv5x\" (UniqueName: \"kubernetes.io/projected/633b52bc-acea-4d38-819a-9ad0c0dcf6e5-kube-api-access-plv5x\") pod \"whisker-79c6d7d9f5-8hp27\" (UID: \"633b52bc-acea-4d38-819a-9ad0c0dcf6e5\") " pod="calico-system/whisker-79c6d7d9f5-8hp27"
May 17 00:26:15.354557 kubelet[3186]: I0517 00:26:15.352851 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2t85\" (UniqueName: \"kubernetes.io/projected/8a71913d-89eb-4c9e-9dfe-b7eb7c1fd2b5-kube-api-access-s2t85\") pod \"calico-kube-controllers-6b48558975-rlpts\" (UID: \"8a71913d-89eb-4c9e-9dfe-b7eb7c1fd2b5\") " pod="calico-system/calico-kube-controllers-6b48558975-rlpts"
May 17 00:26:15.354557 kubelet[3186]: I0517 00:26:15.352873 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/633b52bc-acea-4d38-819a-9ad0c0dcf6e5-whisker-ca-bundle\") pod \"whisker-79c6d7d9f5-8hp27\" (UID: \"633b52bc-acea-4d38-819a-9ad0c0dcf6e5\") " pod="calico-system/whisker-79c6d7d9f5-8hp27"
May 17 00:26:15.375004 systemd[1]: Created slice kubepods-besteffort-pod633b52bc_acea_4d38_819a_9ad0c0dcf6e5.slice - libcontainer container kubepods-besteffort-pod633b52bc_acea_4d38_819a_9ad0c0dcf6e5.slice.
May 17 00:26:15.381037 systemd[1]: Created slice kubepods-besteffort-podfb1a5092_3a29_4f17_a060_ae80b6cdd361.slice - libcontainer container kubepods-besteffort-podfb1a5092_3a29_4f17_a060_ae80b6cdd361.slice.
May 17 00:26:15.389679 systemd[1]: Created slice kubepods-besteffort-poda481fc84_c654_4b7a_8053_527437371f0f.slice - libcontainer container kubepods-besteffort-poda481fc84_c654_4b7a_8053_527437371f0f.slice.
May 17 00:26:15.397065 systemd[1]: Created slice kubepods-burstable-pod9832b571_3650_40cf_9d5c_47e3967ad978.slice - libcontainer container kubepods-burstable-pod9832b571_3650_40cf_9d5c_47e3967ad978.slice.
May 17 00:26:15.409259 systemd[1]: Created slice kubepods-besteffort-pod6c6feb10_5e10_4718_ae2e_34e0ec7b697f.slice - libcontainer container kubepods-besteffort-pod6c6feb10_5e10_4718_ae2e_34e0ec7b697f.slice.
May 17 00:26:15.417784 systemd[1]: Created slice kubepods-burstable-pod7dae1d2e_61e6_48f7_aad5_2ce4e3746c6c.slice - libcontainer container kubepods-burstable-pod7dae1d2e_61e6_48f7_aad5_2ce4e3746c6c.slice.
May 17 00:26:15.454089 kubelet[3186]: I0517 00:26:15.453709 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9832b571-3650-40cf-9d5c-47e3967ad978-config-volume\") pod \"coredns-7c65d6cfc9-jsbnl\" (UID: \"9832b571-3650-40cf-9d5c-47e3967ad978\") " pod="kube-system/coredns-7c65d6cfc9-jsbnl"
May 17 00:26:15.454089 kubelet[3186]: I0517 00:26:15.453748 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz5sl\" (UniqueName: \"kubernetes.io/projected/9832b571-3650-40cf-9d5c-47e3967ad978-kube-api-access-dz5sl\") pod \"coredns-7c65d6cfc9-jsbnl\" (UID: \"9832b571-3650-40cf-9d5c-47e3967ad978\") " pod="kube-system/coredns-7c65d6cfc9-jsbnl"
May 17 00:26:15.454089 kubelet[3186]: I0517 00:26:15.453793 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m75ls\" (UniqueName: \"kubernetes.io/projected/7dae1d2e-61e6-48f7-aad5-2ce4e3746c6c-kube-api-access-m75ls\") pod \"coredns-7c65d6cfc9-5sswv\" (UID: \"7dae1d2e-61e6-48f7-aad5-2ce4e3746c6c\") " pod="kube-system/coredns-7c65d6cfc9-5sswv"
May 17 00:26:15.454089 kubelet[3186]: I0517 00:26:15.453812 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a481fc84-c654-4b7a-8053-527437371f0f-calico-apiserver-certs\") pod \"calico-apiserver-5f57798587-b77km\" (UID: \"a481fc84-c654-4b7a-8053-527437371f0f\") " pod="calico-apiserver/calico-apiserver-5f57798587-b77km"
May 17 00:26:15.454089 kubelet[3186]: I0517 00:26:15.453846 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c6feb10-5e10-4718-ae2e-34e0ec7b697f-config\") pod \"goldmane-8f77d7b6c-ffn6m\" (UID: \"6c6feb10-5e10-4718-ae2e-34e0ec7b697f\") " pod="calico-system/goldmane-8f77d7b6c-ffn6m"
May 17 00:26:15.454337 kubelet[3186]: I0517 00:26:15.453878 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-296m4\" (UniqueName: \"kubernetes.io/projected/6c6feb10-5e10-4718-ae2e-34e0ec7b697f-kube-api-access-296m4\") pod \"goldmane-8f77d7b6c-ffn6m\" (UID: \"6c6feb10-5e10-4718-ae2e-34e0ec7b697f\") " pod="calico-system/goldmane-8f77d7b6c-ffn6m"
May 17 00:26:15.454337 kubelet[3186]: I0517 00:26:15.454054 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62j9j\" (UniqueName: \"kubernetes.io/projected/fb1a5092-3a29-4f17-a060-ae80b6cdd361-kube-api-access-62j9j\") pod \"calico-apiserver-5f57798587-cckmk\" (UID: \"fb1a5092-3a29-4f17-a060-ae80b6cdd361\") " pod="calico-apiserver/calico-apiserver-5f57798587-cckmk"
May 17 00:26:15.454337 kubelet[3186]: I0517 00:26:15.454077 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrq4v\" (UniqueName: \"kubernetes.io/projected/a481fc84-c654-4b7a-8053-527437371f0f-kube-api-access-hrq4v\") pod \"calico-apiserver-5f57798587-b77km\" (UID: \"a481fc84-c654-4b7a-8053-527437371f0f\") " pod="calico-apiserver/calico-apiserver-5f57798587-b77km"
May 17 00:26:15.454337 kubelet[3186]: I0517 00:26:15.454095 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fb1a5092-3a29-4f17-a060-ae80b6cdd361-calico-apiserver-certs\") pod \"calico-apiserver-5f57798587-cckmk\" (UID: \"fb1a5092-3a29-4f17-a060-ae80b6cdd361\") " pod="calico-apiserver/calico-apiserver-5f57798587-cckmk"
May 17 00:26:15.454337 kubelet[3186]: I0517 00:26:15.454112 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7dae1d2e-61e6-48f7-aad5-2ce4e3746c6c-config-volume\") pod \"coredns-7c65d6cfc9-5sswv\" (UID: \"7dae1d2e-61e6-48f7-aad5-2ce4e3746c6c\") " pod="kube-system/coredns-7c65d6cfc9-5sswv"
May 17 00:26:15.456499 kubelet[3186]: I0517 00:26:15.454128 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c6feb10-5e10-4718-ae2e-34e0ec7b697f-goldmane-ca-bundle\") pod \"goldmane-8f77d7b6c-ffn6m\" (UID: \"6c6feb10-5e10-4718-ae2e-34e0ec7b697f\") " pod="calico-system/goldmane-8f77d7b6c-ffn6m"
May 17 00:26:15.456499 kubelet[3186]: I0517 00:26:15.454157 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/6c6feb10-5e10-4718-ae2e-34e0ec7b697f-goldmane-key-pair\") pod \"goldmane-8f77d7b6c-ffn6m\" (UID: \"6c6feb10-5e10-4718-ae2e-34e0ec7b697f\") " pod="calico-system/goldmane-8f77d7b6c-ffn6m"
May 17 00:26:15.672375 containerd[1974]: time="2025-05-17T00:26:15.672221264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b48558975-rlpts,Uid:8a71913d-89eb-4c9e-9dfe-b7eb7c1fd2b5,Namespace:calico-system,Attempt:0,}"
May 17 00:26:15.679017 containerd[1974]: time="2025-05-17T00:26:15.678922529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79c6d7d9f5-8hp27,Uid:633b52bc-acea-4d38-819a-9ad0c0dcf6e5,Namespace:calico-system,Attempt:0,}"
May 17 00:26:15.684934 containerd[1974]: time="2025-05-17T00:26:15.684895819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f57798587-cckmk,Uid:fb1a5092-3a29-4f17-a060-ae80b6cdd361,Namespace:calico-apiserver,Attempt:0,}"
May 17 00:26:15.698973 containerd[1974]: time="2025-05-17T00:26:15.698224938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f57798587-b77km,Uid:a481fc84-c654-4b7a-8053-527437371f0f,Namespace:calico-apiserver,Attempt:0,}"
May 17 00:26:15.705473 containerd[1974]: time="2025-05-17T00:26:15.705414831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jsbnl,Uid:9832b571-3650-40cf-9d5c-47e3967ad978,Namespace:kube-system,Attempt:0,}"
May 17 00:26:15.714497 containerd[1974]: time="2025-05-17T00:26:15.714439599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-ffn6m,Uid:6c6feb10-5e10-4718-ae2e-34e0ec7b697f,Namespace:calico-system,Attempt:0,}"
May 17 00:26:15.726689 containerd[1974]: time="2025-05-17T00:26:15.726633797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5sswv,Uid:7dae1d2e-61e6-48f7-aad5-2ce4e3746c6c,Namespace:kube-system,Attempt:0,}"
May 17 00:26:15.807531 systemd[1]: Created slice kubepods-besteffort-podd1caf04b_d279_4556_9507_efceb97ef03e.slice - libcontainer container kubepods-besteffort-podd1caf04b_d279_4556_9507_efceb97ef03e.slice.
May 17 00:26:15.811622 containerd[1974]: time="2025-05-17T00:26:15.811588026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4gndq,Uid:d1caf04b-d279-4556-9507-efceb97ef03e,Namespace:calico-system,Attempt:0,}"
May 17 00:26:16.003933 containerd[1974]: time="2025-05-17T00:26:16.003573646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\""
May 17 00:26:16.195151 containerd[1974]: time="2025-05-17T00:26:16.195075074Z" level=error msg="Failed to destroy network for sandbox \"7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:16.198105 containerd[1974]: time="2025-05-17T00:26:16.198043446Z" level=error msg="Failed to destroy network for sandbox \"61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:16.215113 containerd[1974]: time="2025-05-17T00:26:16.213406911Z" level=error msg="Failed to destroy network for sandbox \"afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:16.215568 containerd[1974]: time="2025-05-17T00:26:16.215537817Z" level=error msg="encountered an error cleaning up failed sandbox \"afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:16.215725 containerd[1974]: time="2025-05-17T00:26:16.215705533Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4gndq,Uid:d1caf04b-d279-4556-9507-efceb97ef03e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:16.215874 containerd[1974]: time="2025-05-17T00:26:16.214533388Z" level=error msg="encountered an error cleaning up failed sandbox \"61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:16.215959 containerd[1974]: time="2025-05-17T00:26:16.215942024Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f57798587-b77km,Uid:a481fc84-c654-4b7a-8053-527437371f0f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:16.223672 containerd[1974]: time="2025-05-17T00:26:16.214540402Z" level=error msg="encountered an error cleaning up failed sandbox \"7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:16.224136 kubelet[3186]: E0517 00:26:16.224093 3186 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:16.224732 kubelet[3186]: E0517 00:26:16.224597 3186 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4gndq"
May 17 00:26:16.224732 kubelet[3186]: E0517 00:26:16.224642 3186 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4gndq"
May 17 00:26:16.225166 kubelet[3186]: E0517 00:26:16.224497 3186 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:16.225166 kubelet[3186]: E0517 00:26:16.224859 3186 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f57798587-b77km"
May 17 00:26:16.225166 kubelet[3186]: E0517 00:26:16.224889 3186 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f57798587-b77km"
May 17 00:26:16.225452 kubelet[3186]: E0517 00:26:16.224931 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4gndq_calico-system(d1caf04b-d279-4556-9507-efceb97ef03e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4gndq_calico-system(d1caf04b-d279-4556-9507-efceb97ef03e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4gndq" podUID="d1caf04b-d279-4556-9507-efceb97ef03e"
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4gndq_calico-system(d1caf04b-d279-4556-9507-efceb97ef03e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4gndq" podUID="d1caf04b-d279-4556-9507-efceb97ef03e" May 17 00:26:16.225621 containerd[1974]: time="2025-05-17T00:26:16.223874641Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f57798587-cckmk,Uid:fb1a5092-3a29-4f17-a060-ae80b6cdd361,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:26:16.225621 containerd[1974]: time="2025-05-17T00:26:16.214637016Z" level=error msg="Failed to destroy network for sandbox \"c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:26:16.225813 kubelet[3186]: E0517 00:26:16.225521 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f57798587-b77km_calico-apiserver(a481fc84-c654-4b7a-8053-527437371f0f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f57798587-b77km_calico-apiserver(a481fc84-c654-4b7a-8053-527437371f0f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f57798587-b77km" podUID="a481fc84-c654-4b7a-8053-527437371f0f" May 17 00:26:16.226117 kubelet[3186]: E0517 00:26:16.226098 3186 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:26:16.226310 kubelet[3186]: E0517 00:26:16.226247 3186 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f57798587-cckmk" May 17 00:26:16.226310 kubelet[3186]: E0517 00:26:16.226268 3186 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f57798587-cckmk" May 17 00:26:16.227130 kubelet[3186]: E0517 00:26:16.226398 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f57798587-cckmk_calico-apiserver(fb1a5092-3a29-4f17-a060-ae80b6cdd361)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f57798587-cckmk_calico-apiserver(fb1a5092-3a29-4f17-a060-ae80b6cdd361)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f57798587-cckmk" podUID="fb1a5092-3a29-4f17-a060-ae80b6cdd361" May 17 00:26:16.227130 kubelet[3186]: E0517 00:26:16.226997 3186 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:26:16.227130 kubelet[3186]: E0517 00:26:16.227024 3186 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-ffn6m" May 17 00:26:16.227986 containerd[1974]: time="2025-05-17T00:26:16.226727267Z" level=error msg="encountered an error cleaning up failed sandbox \"c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:26:16.227986 containerd[1974]: time="2025-05-17T00:26:16.226773999Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-ffn6m,Uid:6c6feb10-5e10-4718-ae2e-34e0ec7b697f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:26:16.228100 kubelet[3186]: E0517 00:26:16.227040 3186 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-8f77d7b6c-ffn6m" May 17 00:26:16.228100 kubelet[3186]: E0517 00:26:16.227078 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-8f77d7b6c-ffn6m_calico-system(6c6feb10-5e10-4718-ae2e-34e0ec7b697f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-8f77d7b6c-ffn6m_calico-system(6c6feb10-5e10-4718-ae2e-34e0ec7b697f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-ffn6m" podUID="6c6feb10-5e10-4718-ae2e-34e0ec7b697f" May 17 00:26:16.232277 containerd[1974]: time="2025-05-17T00:26:16.232220127Z" level=error msg="Failed to destroy network for sandbox \"a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:26:16.232527 containerd[1974]: time="2025-05-17T00:26:16.232260012Z" level=error msg="Failed to destroy network for sandbox \"91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:26:16.233187 containerd[1974]: time="2025-05-17T00:26:16.233161564Z" level=error msg="encountered an error cleaning up failed sandbox \"91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:26:16.233563 containerd[1974]: time="2025-05-17T00:26:16.233394274Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79c6d7d9f5-8hp27,Uid:633b52bc-acea-4d38-819a-9ad0c0dcf6e5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:26:16.233766 containerd[1974]: time="2025-05-17T00:26:16.233330227Z" level=error msg="encountered an error cleaning up failed sandbox \"a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:26:16.233983 containerd[1974]: time="2025-05-17T00:26:16.233925309Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5sswv,Uid:7dae1d2e-61e6-48f7-aad5-2ce4e3746c6c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" May 17 00:26:16.235132 kubelet[3186]: E0517 00:26:16.235088 3186 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:26:16.236623 kubelet[3186]: E0517 00:26:16.235141 3186 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-5sswv" May 17 00:26:16.236623 kubelet[3186]: E0517 00:26:16.235162 3186 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-5sswv" May 17 00:26:16.236623 kubelet[3186]: E0517 00:26:16.235197 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-5sswv_kube-system(7dae1d2e-61e6-48f7-aad5-2ce4e3746c6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-5sswv_kube-system(7dae1d2e-61e6-48f7-aad5-2ce4e3746c6c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-5sswv" podUID="7dae1d2e-61e6-48f7-aad5-2ce4e3746c6c" May 17 00:26:16.236807 containerd[1974]: time="2025-05-17T00:26:16.235466223Z" level=error msg="Failed to destroy network for sandbox \"cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:26:16.236807 containerd[1974]: time="2025-05-17T00:26:16.235736350Z" level=error msg="encountered an error cleaning up failed sandbox \"cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:26:16.236807 containerd[1974]: time="2025-05-17T00:26:16.235771187Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jsbnl,Uid:9832b571-3650-40cf-9d5c-47e3967ad978,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
May 17 00:26:16.236961 kubelet[3186]: E0517 00:26:16.235236 3186 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:16.236961 kubelet[3186]: E0517 00:26:16.235251 3186 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-79c6d7d9f5-8hp27"
May 17 00:26:16.236961 kubelet[3186]: E0517 00:26:16.235266 3186 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-79c6d7d9f5-8hp27"
May 17 00:26:16.237083 kubelet[3186]: E0517 00:26:16.235326 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-79c6d7d9f5-8hp27_calico-system(633b52bc-acea-4d38-819a-9ad0c0dcf6e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-79c6d7d9f5-8hp27_calico-system(633b52bc-acea-4d38-819a-9ad0c0dcf6e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-79c6d7d9f5-8hp27" podUID="633b52bc-acea-4d38-819a-9ad0c0dcf6e5"
May 17 00:26:16.237083 kubelet[3186]: E0517 00:26:16.236689 3186 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:16.237083 kubelet[3186]: E0517 00:26:16.236730 3186 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-jsbnl"
May 17 00:26:16.237276 kubelet[3186]: E0517 00:26:16.236755 3186 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-jsbnl"
May 17 00:26:16.237276 kubelet[3186]: E0517 00:26:16.236789 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-jsbnl_kube-system(9832b571-3650-40cf-9d5c-47e3967ad978)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-jsbnl_kube-system(9832b571-3650-40cf-9d5c-47e3967ad978)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-jsbnl" podUID="9832b571-3650-40cf-9d5c-47e3967ad978"
May 17 00:26:16.237826 containerd[1974]: time="2025-05-17T00:26:16.237785141Z" level=error msg="Failed to destroy network for sandbox \"11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:16.239216 containerd[1974]: time="2025-05-17T00:26:16.238873783Z" level=error msg="encountered an error cleaning up failed sandbox \"11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:16.239216 containerd[1974]: time="2025-05-17T00:26:16.238917531Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b48558975-rlpts,Uid:8a71913d-89eb-4c9e-9dfe-b7eb7c1fd2b5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:16.239364 kubelet[3186]: E0517 00:26:16.239088 3186 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:16.239364 kubelet[3186]: E0517 00:26:16.239130 3186 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b48558975-rlpts"
May 17 00:26:16.239364 kubelet[3186]: E0517 00:26:16.239147 3186 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b48558975-rlpts"
sandbox \"11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b48558975-rlpts" May 17 00:26:16.239732 kubelet[3186]: E0517 00:26:16.239191 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6b48558975-rlpts_calico-system(8a71913d-89eb-4c9e-9dfe-b7eb7c1fd2b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6b48558975-rlpts_calico-system(8a71913d-89eb-4c9e-9dfe-b7eb7c1fd2b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b48558975-rlpts" podUID="8a71913d-89eb-4c9e-9dfe-b7eb7c1fd2b5" May 17 00:26:16.987729 kubelet[3186]: I0517 00:26:16.987070 3186 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb" May 17 00:26:16.995083 kubelet[3186]: I0517 00:26:16.995059 3186 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79" May 17 00:26:17.040786 kubelet[3186]: I0517 00:26:17.040716 3186 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047" May 17 00:26:17.045163 kubelet[3186]: I0517 00:26:17.044893 3186 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9" May 17 00:26:17.051458 kubelet[3186]: I0517 00:26:17.050515 3186 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568" May 17 00:26:17.056143 kubelet[3186]: I0517 00:26:17.056112 3186 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67" May 17 00:26:17.060447 kubelet[3186]: I0517 00:26:17.060391 3186 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d" May 17 00:26:17.071624 kubelet[3186]: I0517 00:26:17.071593 3186 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215" May 17 00:26:17.091719 containerd[1974]: time="2025-05-17T00:26:17.091667915Z" level=info msg="StopPodSandbox for \"11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215\"" May 17 00:26:17.094776 containerd[1974]: time="2025-05-17T00:26:17.094411289Z" level=info msg="Ensure that sandbox 11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215 in task-service has been cleanup successfully" May 17 00:26:17.099272 containerd[1974]: time="2025-05-17T00:26:17.099232184Z" level=info msg="StopPodSandbox for \"7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79\"" May 17 00:26:17.099703 containerd[1974]: 
time="2025-05-17T00:26:17.099679331Z" level=info msg="Ensure that sandbox 7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79 in task-service has been cleanup successfully" May 17 00:26:17.101922 containerd[1974]: time="2025-05-17T00:26:17.101792348Z" level=info msg="StopPodSandbox for \"c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb\"" May 17 00:26:17.102266 containerd[1974]: time="2025-05-17T00:26:17.102198507Z" level=info msg="StopPodSandbox for \"cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047\"" May 17 00:26:17.122073 containerd[1974]: time="2025-05-17T00:26:17.121571052Z" level=info msg="Ensure that sandbox c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb in task-service has been cleanup successfully" May 17 00:26:17.124448 containerd[1974]: time="2025-05-17T00:26:17.121990865Z" level=info msg="StopPodSandbox for \"a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9\"" May 17 00:26:17.124840 containerd[1974]: time="2025-05-17T00:26:17.124805307Z" level=info msg="Ensure that sandbox a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9 in task-service has been cleanup successfully" May 17 00:26:17.125920 containerd[1974]: time="2025-05-17T00:26:17.122501514Z" level=info msg="Ensure that sandbox cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047 in task-service has been cleanup successfully" May 17 00:26:17.127525 containerd[1974]: time="2025-05-17T00:26:17.122037184Z" level=info msg="StopPodSandbox for \"91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568\"" May 17 00:26:17.127725 containerd[1974]: time="2025-05-17T00:26:17.127607381Z" level=info msg="Ensure that sandbox 91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568 in task-service has been cleanup successfully" May 17 00:26:17.129690 containerd[1974]: time="2025-05-17T00:26:17.122072171Z" level=info msg="StopPodSandbox for \"afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67\"" May 17 00:26:17.130170 containerd[1974]: time="2025-05-17T00:26:17.130148458Z" level=info msg="Ensure that sandbox afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67 in task-service has been cleanup successfully" May 17 00:26:17.130790 containerd[1974]: time="2025-05-17T00:26:17.122098850Z" level=info msg="StopPodSandbox for \"61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d\"" May 17 00:26:17.136834 containerd[1974]: time="2025-05-17T00:26:17.136306378Z" level=info msg="Ensure that sandbox 61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d in task-service has been cleanup successfully" May 17 00:26:17.346364 containerd[1974]: time="2025-05-17T00:26:17.345899128Z" level=error msg="StopPodSandbox for \"91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568\" failed" error="failed to destroy network for sandbox \"91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:26:17.364807 kubelet[3186]: E0517 00:26:17.346204 3186 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
May 17 00:26:17.382031 kubelet[3186]: E0517 00:26:17.364845 3186 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568"}
May 17 00:26:17.382031 kubelet[3186]: E0517 00:26:17.381257 3186 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"633b52bc-acea-4d38-819a-9ad0c0dcf6e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
May 17 00:26:17.382031 kubelet[3186]: E0517 00:26:17.381293 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"633b52bc-acea-4d38-819a-9ad0c0dcf6e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-79c6d7d9f5-8hp27" podUID="633b52bc-acea-4d38-819a-9ad0c0dcf6e5"
May 17 00:26:17.386662 kubelet[3186]: E0517 00:26:17.385013 3186 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb"
May 17 00:26:17.386662 kubelet[3186]: E0517 00:26:17.385384 3186 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb"}
May 17 00:26:17.386662 kubelet[3186]: E0517 00:26:17.385472 3186 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6c6feb10-5e10-4718-ae2e-34e0ec7b697f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
May 17 00:26:17.386662 kubelet[3186]: E0517 00:26:17.385520 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6c6feb10-5e10-4718-ae2e-34e0ec7b697f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-ffn6m" podUID="6c6feb10-5e10-4718-ae2e-34e0ec7b697f"
May 17 00:26:17.386950 containerd[1974]: time="2025-05-17T00:26:17.383809382Z" level=error msg="StopPodSandbox for \"c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb\" failed" error="failed to destroy network for sandbox \"c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:17.387930 containerd[1974]: time="2025-05-17T00:26:17.387877216Z" level=error msg="StopPodSandbox for \"7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79\" failed" error="failed to destroy network for sandbox \"7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:17.388586 containerd[1974]: time="2025-05-17T00:26:17.388238656Z" level=error msg="StopPodSandbox for \"cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047\" failed" error="failed to destroy network for sandbox \"cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:17.388666 kubelet[3186]: E0517 00:26:17.388094 3186 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79"
May 17 00:26:17.388666 kubelet[3186]: E0517 00:26:17.388129 3186 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79"}
May 17 00:26:17.388666 kubelet[3186]: E0517 00:26:17.388375 3186 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047"
May 17 00:26:17.388666 kubelet[3186]: E0517 00:26:17.388438 3186 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047"}
May 17 00:26:17.388666 kubelet[3186]: E0517 00:26:17.388476 3186 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9832b571-3650-40cf-9d5c-47e3967ad978\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
May 17 00:26:17.388958 kubelet[3186]: E0517 00:26:17.388509 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9832b571-3650-40cf-9d5c-47e3967ad978\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-jsbnl" podUID="9832b571-3650-40cf-9d5c-47e3967ad978"
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9832b571-3650-40cf-9d5c-47e3967ad978\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-jsbnl" podUID="9832b571-3650-40cf-9d5c-47e3967ad978" May 17 00:26:17.390910 kubelet[3186]: E0517 00:26:17.388170 3186 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fb1a5092-3a29-4f17-a060-ae80b6cdd361\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:26:17.390910 kubelet[3186]: E0517 00:26:17.389481 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fb1a5092-3a29-4f17-a060-ae80b6cdd361\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f57798587-cckmk" podUID="fb1a5092-3a29-4f17-a060-ae80b6cdd361" May 17 00:26:17.413176 containerd[1974]: time="2025-05-17T00:26:17.413105500Z" level=error msg="StopPodSandbox for \"a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9\" failed" error="failed to destroy network for sandbox \"a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:26:17.413433 kubelet[3186]: E0517 00:26:17.413374 3186 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9" May 17 00:26:17.413671 kubelet[3186]: E0517 00:26:17.413474 3186 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9"} May 17 00:26:17.413671 kubelet[3186]: E0517 00:26:17.413519 3186 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7dae1d2e-61e6-48f7-aad5-2ce4e3746c6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/\"" May 17 00:26:17.413671 kubelet[3186]: E0517 00:26:17.413557 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7dae1d2e-61e6-48f7-aad5-2ce4e3746c6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-5sswv" podUID="7dae1d2e-61e6-48f7-aad5-2ce4e3746c6c" May 17 00:26:17.421764 containerd[1974]: time="2025-05-17T00:26:17.421653675Z" level=error msg="StopPodSandbox for \"11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215\" failed" error="failed to destroy network for sandbox \"11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:26:17.424448 kubelet[3186]: E0517 00:26:17.423860 3186 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215" May 17 00:26:17.424448 kubelet[3186]: E0517 00:26:17.423918 3186 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215"} May 17 00:26:17.424448 kubelet[3186]: E0517 00:26:17.423967 3186 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8a71913d-89eb-4c9e-9dfe-b7eb7c1fd2b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:26:17.424448 kubelet[3186]: E0517 00:26:17.424006 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8a71913d-89eb-4c9e-9dfe-b7eb7c1fd2b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b48558975-rlpts" podUID="8a71913d-89eb-4c9e-9dfe-b7eb7c1fd2b5" May 17 00:26:17.426591 containerd[1974]: time="2025-05-17T00:26:17.426415418Z" level=error msg="StopPodSandbox for \"afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67\" failed" error="failed to destroy network for sandbox \"afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
May 17 00:26:17.428049 kubelet[3186]: E0517 00:26:17.428006 3186 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67"
May 17 00:26:17.428283 kubelet[3186]: E0517 00:26:17.428065 3186 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67"}
May 17 00:26:17.428283 kubelet[3186]: E0517 00:26:17.428113 3186 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d1caf04b-d279-4556-9507-efceb97ef03e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
May 17 00:26:17.428283 kubelet[3186]: E0517 00:26:17.428144 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d1caf04b-d279-4556-9507-efceb97ef03e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4gndq" podUID="d1caf04b-d279-4556-9507-efceb97ef03e"
May 17 00:26:17.428902 kubelet[3186]: E0517 00:26:17.428498 3186 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d"
May 17 00:26:17.428902 kubelet[3186]: E0517 00:26:17.428531 3186 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d"}
May 17 00:26:17.428902 kubelet[3186]: E0517 00:26:17.428567 3186 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a481fc84-c654-4b7a-8053-527437371f0f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
May 17 00:26:17.428902 kubelet[3186]: E0517 00:26:17.428596 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a481fc84-c654-4b7a-8053-527437371f0f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f57798587-b77km" podUID="a481fc84-c654-4b7a-8053-527437371f0f"
May 17 00:26:17.429180 containerd[1974]: time="2025-05-17T00:26:17.428320099Z" level=error msg="StopPodSandbox for \"61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d\" failed" error="failed to destroy network for sandbox \"61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:26:22.123285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1396921288.mount: Deactivated successfully.
May 17 00:26:22.213037 containerd[1974]: time="2025-05-17T00:26:22.212354306Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:26:22.215202 containerd[1974]: time="2025-05-17T00:26:22.215152146Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372"
May 17 00:26:22.215694 containerd[1974]: time="2025-05-17T00:26:22.215530401Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 6.211909355s"
May 17 00:26:22.215694 containerd[1974]: time="2025-05-17T00:26:22.215561802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\""
May 17 00:26:22.234350 containerd[1974]: time="2025-05-17T00:26:22.234303775Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:26:22.235136 containerd[1974]: time="2025-05-17T00:26:22.234808531Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:26:22.284768 containerd[1974]: time="2025-05-17T00:26:22.284717141Z" level=info msg="CreateContainer within sandbox \"cdf621924179ade85b5f7748f09b82c10817339a73bd22db337d104f8f78a04a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
May 17 00:26:22.356512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2562816083.mount: Deactivated successfully.
May 17 00:26:22.365239 containerd[1974]: time="2025-05-17T00:26:22.365183494Z" level=info msg="CreateContainer within sandbox \"cdf621924179ade85b5f7748f09b82c10817339a73bd22db337d104f8f78a04a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ac42a82a61d2f910cba8531945cbd4cc55f9544dd00afd3ee66eb57d2755662b\"" May 17 00:26:22.366388 containerd[1974]: time="2025-05-17T00:26:22.366238582Z" level=info msg="StartContainer for \"ac42a82a61d2f910cba8531945cbd4cc55f9544dd00afd3ee66eb57d2755662b\"" May 17 00:26:22.495225 systemd[1]: Started cri-containerd-ac42a82a61d2f910cba8531945cbd4cc55f9544dd00afd3ee66eb57d2755662b.scope - libcontainer container ac42a82a61d2f910cba8531945cbd4cc55f9544dd00afd3ee66eb57d2755662b. May 17 00:26:22.542827 containerd[1974]: time="2025-05-17T00:26:22.542774811Z" level=info msg="StartContainer for \"ac42a82a61d2f910cba8531945cbd4cc55f9544dd00afd3ee66eb57d2755662b\" returns successfully" May 17 00:26:22.680759 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 17 00:26:22.682748 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. May 17 00:26:23.030753 containerd[1974]: time="2025-05-17T00:26:23.030371665Z" level=info msg="StopPodSandbox for \"91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568\"" May 17 00:26:23.227586 kubelet[3186]: I0517 00:26:23.210358 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-t2x7c" podStartSLOduration=2.386896906 podStartE2EDuration="18.193264459s" podCreationTimestamp="2025-05-17 00:26:05 +0000 UTC" firstStartedPulling="2025-05-17 00:26:06.429215254 +0000 UTC m=+22.743538296" lastFinishedPulling="2025-05-17 00:26:22.23558282 +0000 UTC m=+38.549905849" observedRunningTime="2025-05-17 00:26:23.181380758 +0000 UTC m=+39.495703805" watchObservedRunningTime="2025-05-17 00:26:23.193264459 +0000 UTC m=+39.507587503" May 17 00:26:23.635917 containerd[1974]: 2025-05-17 00:26:23.194 [INFO][4628] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568" May 17 00:26:23.635917 containerd[1974]: 2025-05-17 00:26:23.195 [INFO][4628] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568" iface="eth0" netns="/var/run/netns/cni-4fb57157-c323-ec6a-8d5e-35432d4056b3" May 17 00:26:23.635917 containerd[1974]: 2025-05-17 00:26:23.195 [INFO][4628] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568" iface="eth0" netns="/var/run/netns/cni-4fb57157-c323-ec6a-8d5e-35432d4056b3" May 17 00:26:23.635917 containerd[1974]: 2025-05-17 00:26:23.198 [INFO][4628] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568" iface="eth0" netns="/var/run/netns/cni-4fb57157-c323-ec6a-8d5e-35432d4056b3" May 17 00:26:23.635917 containerd[1974]: 2025-05-17 00:26:23.198 [INFO][4628] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568" May 17 00:26:23.635917 containerd[1974]: 2025-05-17 00:26:23.198 [INFO][4628] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568" May 17 00:26:23.635917 containerd[1974]: 2025-05-17 00:26:23.596 [INFO][4637] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568" HandleID="k8s-pod-network.91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568" Workload="ip--172--31--23--228-k8s-whisker--79c6d7d9f5--8hp27-eth0" May 17 00:26:23.635917 containerd[1974]: 2025-05-17 00:26:23.604 [INFO][4637] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:23.635917 containerd[1974]: 2025-05-17 00:26:23.604 [INFO][4637] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:26:23.635917 containerd[1974]: 2025-05-17 00:26:23.629 [WARNING][4637] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568" HandleID="k8s-pod-network.91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568" Workload="ip--172--31--23--228-k8s-whisker--79c6d7d9f5--8hp27-eth0" May 17 00:26:23.635917 containerd[1974]: 2025-05-17 00:26:23.630 [INFO][4637] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568" HandleID="k8s-pod-network.91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568" Workload="ip--172--31--23--228-k8s-whisker--79c6d7d9f5--8hp27-eth0" May 17 00:26:23.635917 containerd[1974]: 2025-05-17 00:26:23.631 [INFO][4637] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:23.635917 containerd[1974]: 2025-05-17 00:26:23.633 [INFO][4628] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568" May 17 00:26:23.638926 containerd[1974]: time="2025-05-17T00:26:23.636307541Z" level=info msg="TearDown network for sandbox \"91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568\" successfully" May 17 00:26:23.638926 containerd[1974]: time="2025-05-17T00:26:23.636345287Z" level=info msg="StopPodSandbox for \"91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568\" returns successfully" May 17 00:26:23.639084 systemd[1]: run-netns-cni\x2d4fb57157\x2dc323\x2dec6a\x2d8d5e\x2d35432d4056b3.mount: Deactivated successfully. 
May 17 00:26:23.732080 kubelet[3186]: I0517 00:26:23.732037 3186 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/633b52bc-acea-4d38-819a-9ad0c0dcf6e5-whisker-ca-bundle\") pod \"633b52bc-acea-4d38-819a-9ad0c0dcf6e5\" (UID: \"633b52bc-acea-4d38-819a-9ad0c0dcf6e5\") " May 17 00:26:23.732283 kubelet[3186]: I0517 00:26:23.732268 3186 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plv5x\" (UniqueName: \"kubernetes.io/projected/633b52bc-acea-4d38-819a-9ad0c0dcf6e5-kube-api-access-plv5x\") pod \"633b52bc-acea-4d38-819a-9ad0c0dcf6e5\" (UID: \"633b52bc-acea-4d38-819a-9ad0c0dcf6e5\") " May 17 00:26:23.732356 kubelet[3186]: I0517 00:26:23.732348 3186 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/633b52bc-acea-4d38-819a-9ad0c0dcf6e5-whisker-backend-key-pair\") pod \"633b52bc-acea-4d38-819a-9ad0c0dcf6e5\" (UID: \"633b52bc-acea-4d38-819a-9ad0c0dcf6e5\") " May 17 00:26:23.743073 kubelet[3186]: I0517 00:26:23.741683 3186 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/633b52bc-acea-4d38-819a-9ad0c0dcf6e5-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "633b52bc-acea-4d38-819a-9ad0c0dcf6e5" (UID: "633b52bc-acea-4d38-819a-9ad0c0dcf6e5"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 00:26:23.746276 systemd[1]: var-lib-kubelet-pods-633b52bc\x2dacea\x2d4d38\x2d819a\x2d9ad0c0dcf6e5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dplv5x.mount: Deactivated successfully. May 17 00:26:23.746663 kubelet[3186]: I0517 00:26:23.746531 3186 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/633b52bc-acea-4d38-819a-9ad0c0dcf6e5-kube-api-access-plv5x" (OuterVolumeSpecName: "kube-api-access-plv5x") pod "633b52bc-acea-4d38-819a-9ad0c0dcf6e5" (UID: "633b52bc-acea-4d38-819a-9ad0c0dcf6e5"). InnerVolumeSpecName "kube-api-access-plv5x". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:26:23.748394 kubelet[3186]: I0517 00:26:23.748366 3186 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/633b52bc-acea-4d38-819a-9ad0c0dcf6e5-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "633b52bc-acea-4d38-819a-9ad0c0dcf6e5" (UID: "633b52bc-acea-4d38-819a-9ad0c0dcf6e5"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 00:26:23.750576 systemd[1]: var-lib-kubelet-pods-633b52bc\x2dacea\x2d4d38\x2d819a\x2d9ad0c0dcf6e5-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 17 00:26:23.826834 systemd[1]: Removed slice kubepods-besteffort-pod633b52bc_acea_4d38_819a_9ad0c0dcf6e5.slice - libcontainer container kubepods-besteffort-pod633b52bc_acea_4d38_819a_9ad0c0dcf6e5.slice. 
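The mount-unit names systemd logs here (var-lib-kubelet-pods-…\x2d…\x7eprojected…) use systemd's unit-name escaping: "/" separators become "-", and bytes such as "-" and "~" inside a path component are hex-escaped as \x2d and \x7e. A small decoder following the rules documented in systemd.unit(5) recovers the real paths:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnit reverses systemd unit-name escaping for absolute
// paths: "-" separates components, "\xNN" encodes a literal byte.
func unescapeUnit(name string) string {
	name = strings.TrimSuffix(name, ".mount")
	var b strings.Builder
	for i := 0; i < len(name); i++ {
		switch {
		case name[i] == '-':
			b.WriteByte('/')
		case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
			if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 3 // plus the loop's i++ skips the full \xNN escape
				continue
			}
			b.WriteByte(name[i])
		default:
			b.WriteByte(name[i])
		}
	}
	return "/" + b.String()
}

func main() {
	// The kube-api-access mount unit deactivated above:
	fmt.Println(unescapeUnit(
		`var-lib-kubelet-pods-633b52bc\x2dacea\x2d4d38\x2d819a\x2d9ad0c0dcf6e5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dplv5x.mount`))
	// -> /var/lib/kubelet/pods/633b52bc-acea-4d38-819a-9ad0c0dcf6e5/volumes/kubernetes.io~projected/kube-api-access-plv5x
}
```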
May 17 00:26:23.839937 kubelet[3186]: I0517 00:26:23.839473 3186 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/633b52bc-acea-4d38-819a-9ad0c0dcf6e5-whisker-backend-key-pair\") on node \"ip-172-31-23-228\" DevicePath \"\"" May 17 00:26:23.839937 kubelet[3186]: I0517 00:26:23.839503 3186 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/633b52bc-acea-4d38-819a-9ad0c0dcf6e5-whisker-ca-bundle\") on node \"ip-172-31-23-228\" DevicePath \"\"" May 17 00:26:23.839937 kubelet[3186]: I0517 00:26:23.839513 3186 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plv5x\" (UniqueName: \"kubernetes.io/projected/633b52bc-acea-4d38-819a-9ad0c0dcf6e5-kube-api-access-plv5x\") on node \"ip-172-31-23-228\" DevicePath \"\"" May 17 00:26:24.164914 kubelet[3186]: I0517 00:26:24.164756 3186 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:26:24.298297 systemd[1]: Created slice kubepods-besteffort-pod35be9fdc_2ef5_4d7b_b281_9e429560f362.slice - libcontainer container kubepods-besteffort-pod35be9fdc_2ef5_4d7b_b281_9e429560f362.slice. May 17 00:26:24.343076 kubelet[3186]: I0517 00:26:24.343039 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/35be9fdc-2ef5-4d7b-b281-9e429560f362-whisker-backend-key-pair\") pod \"whisker-78df879455-m7stx\" (UID: \"35be9fdc-2ef5-4d7b-b281-9e429560f362\") " pod="calico-system/whisker-78df879455-m7stx" May 17 00:26:24.343809 kubelet[3186]: I0517 00:26:24.343717 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dbcj\" (UniqueName: \"kubernetes.io/projected/35be9fdc-2ef5-4d7b-b281-9e429560f362-kube-api-access-8dbcj\") pod \"whisker-78df879455-m7stx\" (UID: \"35be9fdc-2ef5-4d7b-b281-9e429560f362\") " pod="calico-system/whisker-78df879455-m7stx" May 17 00:26:24.343809 kubelet[3186]: I0517 00:26:24.343749 3186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35be9fdc-2ef5-4d7b-b281-9e429560f362-whisker-ca-bundle\") pod \"whisker-78df879455-m7stx\" (UID: \"35be9fdc-2ef5-4d7b-b281-9e429560f362\") " pod="calico-system/whisker-78df879455-m7stx" May 17 00:26:24.604452 containerd[1974]: time="2025-05-17T00:26:24.604389709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78df879455-m7stx,Uid:35be9fdc-2ef5-4d7b-b281-9e429560f362,Namespace:calico-system,Attempt:0,}" May 17 00:26:24.770040 (udev-worker)[4596]: Network interface NamePolicy= disabled on kernel command line. 
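The slice names in the "Removed slice"/"Created slice" lines are derived mechanically from the pod's QoS class and UID under the systemd cgroup driver: dashes in the UID become underscores. A sketch that reproduces the names seen here:

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName reproduces the kubelet's systemd slice naming as seen
// in this log: kubepods-<qos>-pod<uid-with-underscores>.slice.
func podSliceName(qosClass, podUID string) string {
	return "kubepods-" + qosClass + "-pod" +
		strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	// Matches "Created slice kubepods-besteffort-pod35be9fdc_...slice" above.
	fmt.Println(podSliceName("besteffort", "35be9fdc-2ef5-4d7b-b281-9e429560f362"))
}
```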
May 17 00:26:24.777695 systemd-networkd[1893]: calid33deb1fe77: Link UP May 17 00:26:24.778849 systemd-networkd[1893]: calid33deb1fe77: Gained carrier May 17 00:26:24.797925 containerd[1974]: 2025-05-17 00:26:24.662 [INFO][4739] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:26:24.797925 containerd[1974]: 2025-05-17 00:26:24.673 [INFO][4739] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--228-k8s-whisker--78df879455--m7stx-eth0 whisker-78df879455- calico-system 35be9fdc-2ef5-4d7b-b281-9e429560f362 918 0 2025-05-17 00:26:24 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:78df879455 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-23-228 whisker-78df879455-m7stx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid33deb1fe77 [] [] }} ContainerID="c569e04e6c7527793ceff5658e16a910cfdc41a665ed2d4ced663af89c881bac" Namespace="calico-system" Pod="whisker-78df879455-m7stx" WorkloadEndpoint="ip--172--31--23--228-k8s-whisker--78df879455--m7stx-" May 17 00:26:24.797925 containerd[1974]: 2025-05-17 00:26:24.673 [INFO][4739] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c569e04e6c7527793ceff5658e16a910cfdc41a665ed2d4ced663af89c881bac" Namespace="calico-system" Pod="whisker-78df879455-m7stx" WorkloadEndpoint="ip--172--31--23--228-k8s-whisker--78df879455--m7stx-eth0" May 17 00:26:24.797925 containerd[1974]: 2025-05-17 00:26:24.706 [INFO][4751] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c569e04e6c7527793ceff5658e16a910cfdc41a665ed2d4ced663af89c881bac" HandleID="k8s-pod-network.c569e04e6c7527793ceff5658e16a910cfdc41a665ed2d4ced663af89c881bac" Workload="ip--172--31--23--228-k8s-whisker--78df879455--m7stx-eth0" May 17 00:26:24.797925 containerd[1974]: 2025-05-17 00:26:24.706 [INFO][4751] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c569e04e6c7527793ceff5658e16a910cfdc41a665ed2d4ced663af89c881bac" HandleID="k8s-pod-network.c569e04e6c7527793ceff5658e16a910cfdc41a665ed2d4ced663af89c881bac" Workload="ip--172--31--23--228-k8s-whisker--78df879455--m7stx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c96f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-228", "pod":"whisker-78df879455-m7stx", "timestamp":"2025-05-17 00:26:24.706399316 +0000 UTC"}, Hostname:"ip-172-31-23-228", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:26:24.797925 containerd[1974]: 2025-05-17 00:26:24.706 [INFO][4751] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:24.797925 containerd[1974]: 2025-05-17 00:26:24.706 [INFO][4751] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:26:24.797925 containerd[1974]: 2025-05-17 00:26:24.706 [INFO][4751] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-228' May 17 00:26:24.797925 containerd[1974]: 2025-05-17 00:26:24.715 [INFO][4751] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c569e04e6c7527793ceff5658e16a910cfdc41a665ed2d4ced663af89c881bac" host="ip-172-31-23-228" May 17 00:26:24.797925 containerd[1974]: 2025-05-17 00:26:24.729 [INFO][4751] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-228" May 17 00:26:24.797925 containerd[1974]: 2025-05-17 00:26:24.736 [INFO][4751] ipam/ipam.go 511: Trying affinity for 192.168.17.128/26 host="ip-172-31-23-228" May 17 00:26:24.797925 containerd[1974]: 2025-05-17 00:26:24.738 [INFO][4751] ipam/ipam.go 158: Attempting to load block cidr=192.168.17.128/26 host="ip-172-31-23-228" May 17 00:26:24.797925 containerd[1974]: 2025-05-17 00:26:24.740 [INFO][4751] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.17.128/26 host="ip-172-31-23-228" May 17 00:26:24.797925 containerd[1974]: 2025-05-17 00:26:24.740 [INFO][4751] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.17.128/26 handle="k8s-pod-network.c569e04e6c7527793ceff5658e16a910cfdc41a665ed2d4ced663af89c881bac" host="ip-172-31-23-228" May 17 00:26:24.797925 containerd[1974]: 2025-05-17 00:26:24.742 [INFO][4751] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c569e04e6c7527793ceff5658e16a910cfdc41a665ed2d4ced663af89c881bac May 17 00:26:24.797925 containerd[1974]: 2025-05-17 00:26:24.748 [INFO][4751] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.17.128/26 handle="k8s-pod-network.c569e04e6c7527793ceff5658e16a910cfdc41a665ed2d4ced663af89c881bac" host="ip-172-31-23-228" May 17 00:26:24.797925 containerd[1974]: 2025-05-17 00:26:24.753 [INFO][4751] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.17.129/26] block=192.168.17.128/26 handle="k8s-pod-network.c569e04e6c7527793ceff5658e16a910cfdc41a665ed2d4ced663af89c881bac" host="ip-172-31-23-228" May 17 00:26:24.797925 containerd[1974]: 2025-05-17 00:26:24.753 [INFO][4751] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.17.129/26] handle="k8s-pod-network.c569e04e6c7527793ceff5658e16a910cfdc41a665ed2d4ced663af89c881bac" host="ip-172-31-23-228" May 17 00:26:24.797925 containerd[1974]: 2025-05-17 00:26:24.753 [INFO][4751] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
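The IPAM walk above confirms the node's affinity for block 192.168.17.128/26 and claims 192.168.17.129 from it. A /26 spans 2^(32-26) = 64 addresses, .128 through .191, which bounds how many workload IPs this block can serve; a quick standard-library check:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.17.128/26")
	claimed := netip.MustParseAddr("192.168.17.129")

	fmt.Println(block.Contains(claimed))  // true
	fmt.Println(1 << (32 - block.Bits())) // 64 addresses in the block

	// Walk the block to find its last address.
	last := block.Addr()
	for a := last; block.Contains(a); a = a.Next() {
		last = a
	}
	fmt.Println(last) // 192.168.17.191
}
```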
May 17 00:26:24.797925 containerd[1974]: 2025-05-17 00:26:24.753 [INFO][4751] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.129/26] IPv6=[] ContainerID="c569e04e6c7527793ceff5658e16a910cfdc41a665ed2d4ced663af89c881bac" HandleID="k8s-pod-network.c569e04e6c7527793ceff5658e16a910cfdc41a665ed2d4ced663af89c881bac" Workload="ip--172--31--23--228-k8s-whisker--78df879455--m7stx-eth0" May 17 00:26:24.799726 containerd[1974]: 2025-05-17 00:26:24.757 [INFO][4739] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c569e04e6c7527793ceff5658e16a910cfdc41a665ed2d4ced663af89c881bac" Namespace="calico-system" Pod="whisker-78df879455-m7stx" WorkloadEndpoint="ip--172--31--23--228-k8s-whisker--78df879455--m7stx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--228-k8s-whisker--78df879455--m7stx-eth0", GenerateName:"whisker-78df879455-", Namespace:"calico-system", SelfLink:"", UID:"35be9fdc-2ef5-4d7b-b281-9e429560f362", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 26, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78df879455", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-228", ContainerID:"", Pod:"whisker-78df879455-m7stx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.17.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid33deb1fe77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:24.799726 containerd[1974]: 2025-05-17 00:26:24.758 [INFO][4739] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.17.129/32] ContainerID="c569e04e6c7527793ceff5658e16a910cfdc41a665ed2d4ced663af89c881bac" Namespace="calico-system" Pod="whisker-78df879455-m7stx" WorkloadEndpoint="ip--172--31--23--228-k8s-whisker--78df879455--m7stx-eth0" May 17 00:26:24.799726 containerd[1974]: 2025-05-17 00:26:24.758 [INFO][4739] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid33deb1fe77 ContainerID="c569e04e6c7527793ceff5658e16a910cfdc41a665ed2d4ced663af89c881bac" Namespace="calico-system" Pod="whisker-78df879455-m7stx" WorkloadEndpoint="ip--172--31--23--228-k8s-whisker--78df879455--m7stx-eth0" May 17 00:26:24.799726 containerd[1974]: 2025-05-17 00:26:24.778 [INFO][4739] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c569e04e6c7527793ceff5658e16a910cfdc41a665ed2d4ced663af89c881bac" Namespace="calico-system" Pod="whisker-78df879455-m7stx" WorkloadEndpoint="ip--172--31--23--228-k8s-whisker--78df879455--m7stx-eth0" May 17 00:26:24.799726 containerd[1974]: 2025-05-17 00:26:24.778 [INFO][4739] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c569e04e6c7527793ceff5658e16a910cfdc41a665ed2d4ced663af89c881bac" Namespace="calico-system" Pod="whisker-78df879455-m7stx" 
WorkloadEndpoint="ip--172--31--23--228-k8s-whisker--78df879455--m7stx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--228-k8s-whisker--78df879455--m7stx-eth0", GenerateName:"whisker-78df879455-", Namespace:"calico-system", SelfLink:"", UID:"35be9fdc-2ef5-4d7b-b281-9e429560f362", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 26, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78df879455", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-228", ContainerID:"c569e04e6c7527793ceff5658e16a910cfdc41a665ed2d4ced663af89c881bac", Pod:"whisker-78df879455-m7stx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.17.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid33deb1fe77", MAC:"b2:39:3d:92:7a:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:24.799726 containerd[1974]: 2025-05-17 00:26:24.793 [INFO][4739] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c569e04e6c7527793ceff5658e16a910cfdc41a665ed2d4ced663af89c881bac" Namespace="calico-system" Pod="whisker-78df879455-m7stx" WorkloadEndpoint="ip--172--31--23--228-k8s-whisker--78df879455--m7stx-eth0" May 17 00:26:24.827801 containerd[1974]: time="2025-05-17T00:26:24.827551848Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:26:24.828226 containerd[1974]: time="2025-05-17T00:26:24.827723949Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:26:24.828226 containerd[1974]: time="2025-05-17T00:26:24.827767135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:26:24.828226 containerd[1974]: time="2025-05-17T00:26:24.827873590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:26:24.864628 systemd[1]: Started cri-containerd-c569e04e6c7527793ceff5658e16a910cfdc41a665ed2d4ced663af89c881bac.scope - libcontainer container c569e04e6c7527793ceff5658e16a910cfdc41a665ed2d4ced663af89c881bac. 
May 17 00:26:24.925715 containerd[1974]: time="2025-05-17T00:26:24.924784546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78df879455-m7stx,Uid:35be9fdc-2ef5-4d7b-b281-9e429560f362,Namespace:calico-system,Attempt:0,} returns sandbox id \"c569e04e6c7527793ceff5658e16a910cfdc41a665ed2d4ced663af89c881bac\"" May 17 00:26:24.927495 containerd[1974]: time="2025-05-17T00:26:24.927053596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:26:25.104843 containerd[1974]: time="2025-05-17T00:26:25.104773197Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:26:25.106966 containerd[1974]: time="2025-05-17T00:26:25.106890479Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:26:25.107242 containerd[1974]: time="2025-05-17T00:26:25.106992946Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:26:25.107287 kubelet[3186]: E0517 00:26:25.107193 3186 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:26:25.108583 kubelet[3186]: E0517 00:26:25.108530 3186 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:26:25.142891 kubelet[3186]: E0517 00:26:25.138143 3186 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f414e32a0f9e4b79b1ec41579d3d8398,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8dbcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78df879455-m7stx_calico-system(35be9fdc-2ef5-4d7b-b281-9e429560f362): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:26:25.144968 containerd[1974]: time="2025-05-17T00:26:25.144934547Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:26:25.333022 containerd[1974]: time="2025-05-17T00:26:25.332926948Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:26:25.335069 containerd[1974]: time="2025-05-17T00:26:25.335019057Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:26:25.335251 containerd[1974]: time="2025-05-17T00:26:25.335109316Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:26:25.335301 kubelet[3186]: E0517 00:26:25.335254 3186 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:26:25.335379 kubelet[3186]: E0517 00:26:25.335307 3186 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:26:25.335484 kubelet[3186]: E0517 00:26:25.335410 3186 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8dbcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78df879455-m7stx_calico-system(35be9fdc-2ef5-4d7b-b281-9e429560f362): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:26:25.340098 kubelet[3186]: E0517 00:26:25.340005 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-78df879455-m7stx" podUID="35be9fdc-2ef5-4d7b-b281-9e429560f362" May 17 00:26:25.472276 systemd[1]: run-containerd-runc-k8s.io-c569e04e6c7527793ceff5658e16a910cfdc41a665ed2d4ced663af89c881bac-runc.plDoJm.mount: Deactivated successfully. May 17 00:26:25.796687 kubelet[3186]: I0517 00:26:25.796648 3186 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="633b52bc-acea-4d38-819a-9ad0c0dcf6e5" path="/var/lib/kubelet/pods/633b52bc-acea-4d38-819a-9ad0c0dcf6e5/volumes" May 17 00:26:25.966641 systemd-networkd[1893]: calid33deb1fe77: Gained IPv6LL May 17 00:26:26.165238 kubelet[3186]: E0517 00:26:26.165127 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-78df879455-m7stx" podUID="35be9fdc-2ef5-4d7b-b281-9e429560f362" May 17 00:26:27.464789 systemd[1]: Started sshd@9-172.31.23.228:22-147.75.109.163:45752.service - OpenSSH per-connection server daemon (147.75.109.163:45752). May 17 00:26:27.547578 kubelet[3186]: I0517 00:26:27.547536 3186 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:26:27.682680 sshd[4848]: Accepted publickey for core from 147.75.109.163 port 45752 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:26:27.685884 sshd[4848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:26:27.693045 systemd-logind[1955]: New session 10 of user core. May 17 00:26:27.695615 systemd[1]: Started session-10.scope - Session 10 of User core. May 17 00:26:27.818968 containerd[1974]: time="2025-05-17T00:26:27.799018396Z" level=info msg="StopPodSandbox for \"61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d\"" May 17 00:26:27.820043 containerd[1974]: time="2025-05-17T00:26:27.799097186Z" level=info msg="StopPodSandbox for \"7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79\"" May 17 00:26:27.985459 kubelet[3186]: I0517 00:26:27.985253 3186 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:26:28.090929 containerd[1974]: 2025-05-17 00:26:27.944 [INFO][4929] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79" May 17 00:26:28.090929 containerd[1974]: 2025-05-17 00:26:27.945 [INFO][4929] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79" iface="eth0" netns="/var/run/netns/cni-dec775f6-808d-dae1-fed9-887c24c358e9" May 17 00:26:28.090929 containerd[1974]: 2025-05-17 00:26:27.946 [INFO][4929] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79" iface="eth0" netns="/var/run/netns/cni-dec775f6-808d-dae1-fed9-887c24c358e9" May 17 00:26:28.090929 containerd[1974]: 2025-05-17 00:26:27.947 [INFO][4929] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79" iface="eth0" netns="/var/run/netns/cni-dec775f6-808d-dae1-fed9-887c24c358e9" May 17 00:26:28.090929 containerd[1974]: 2025-05-17 00:26:27.947 [INFO][4929] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79" May 17 00:26:28.090929 containerd[1974]: 2025-05-17 00:26:27.947 [INFO][4929] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79" May 17 00:26:28.090929 containerd[1974]: 2025-05-17 00:26:28.038 [INFO][4948] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79" HandleID="k8s-pod-network.7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79" Workload="ip--172--31--23--228-k8s-calico--apiserver--5f57798587--cckmk-eth0" May 17 00:26:28.090929 containerd[1974]: 2025-05-17 00:26:28.038 [INFO][4948] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:28.090929 containerd[1974]: 2025-05-17 00:26:28.038 [INFO][4948] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:26:28.090929 containerd[1974]: 2025-05-17 00:26:28.067 [WARNING][4948] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79" HandleID="k8s-pod-network.7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79" Workload="ip--172--31--23--228-k8s-calico--apiserver--5f57798587--cckmk-eth0" May 17 00:26:28.090929 containerd[1974]: 2025-05-17 00:26:28.067 [INFO][4948] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79" HandleID="k8s-pod-network.7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79" Workload="ip--172--31--23--228-k8s-calico--apiserver--5f57798587--cckmk-eth0" May 17 00:26:28.090929 containerd[1974]: 2025-05-17 00:26:28.080 [INFO][4948] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:28.090929 containerd[1974]: 2025-05-17 00:26:28.086 [INFO][4929] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79" May 17 00:26:28.097603 containerd[1974]: time="2025-05-17T00:26:28.094506616Z" level=info msg="TearDown network for sandbox \"7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79\" successfully" May 17 00:26:28.097603 containerd[1974]: time="2025-05-17T00:26:28.094542754Z" level=info msg="StopPodSandbox for \"7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79\" returns successfully" May 17 00:26:28.098140 containerd[1974]: time="2025-05-17T00:26:28.098099911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f57798587-cckmk,Uid:fb1a5092-3a29-4f17-a060-ae80b6cdd361,Namespace:calico-apiserver,Attempt:1,}" May 17 00:26:28.100090 systemd[1]: run-netns-cni\x2ddec775f6\x2d808d\x2ddae1\x2dfed9\x2d887c24c358e9.mount: Deactivated successfully. May 17 00:26:28.123962 containerd[1974]: 2025-05-17 00:26:27.992 [INFO][4938] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d" May 17 00:26:28.123962 containerd[1974]: 2025-05-17 00:26:27.995 [INFO][4938] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d" iface="eth0" netns="/var/run/netns/cni-1325051e-1cb8-6ff5-60ea-58829d7948ea" May 17 00:26:28.123962 containerd[1974]: 2025-05-17 00:26:27.995 [INFO][4938] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d" iface="eth0" netns="/var/run/netns/cni-1325051e-1cb8-6ff5-60ea-58829d7948ea" May 17 00:26:28.123962 containerd[1974]: 2025-05-17 00:26:27.996 [INFO][4938] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d" iface="eth0" netns="/var/run/netns/cni-1325051e-1cb8-6ff5-60ea-58829d7948ea" May 17 00:26:28.123962 containerd[1974]: 2025-05-17 00:26:27.996 [INFO][4938] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d" May 17 00:26:28.123962 containerd[1974]: 2025-05-17 00:26:27.996 [INFO][4938] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d" May 17 00:26:28.123962 containerd[1974]: 2025-05-17 00:26:28.090 [INFO][4957] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d" HandleID="k8s-pod-network.61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d" Workload="ip--172--31--23--228-k8s-calico--apiserver--5f57798587--b77km-eth0" May 17 00:26:28.123962 containerd[1974]: 2025-05-17 00:26:28.091 [INFO][4957] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:28.123962 containerd[1974]: 2025-05-17 00:26:28.092 [INFO][4957] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:26:28.123962 containerd[1974]: 2025-05-17 00:26:28.113 [WARNING][4957] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d" HandleID="k8s-pod-network.61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d" Workload="ip--172--31--23--228-k8s-calico--apiserver--5f57798587--b77km-eth0" May 17 00:26:28.123962 containerd[1974]: 2025-05-17 00:26:28.113 [INFO][4957] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d" HandleID="k8s-pod-network.61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d" Workload="ip--172--31--23--228-k8s-calico--apiserver--5f57798587--b77km-eth0" May 17 00:26:28.123962 containerd[1974]: 2025-05-17 00:26:28.116 [INFO][4957] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:28.123962 containerd[1974]: 2025-05-17 00:26:28.120 [INFO][4938] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d" May 17 00:26:28.125416 containerd[1974]: time="2025-05-17T00:26:28.125316377Z" level=info msg="TearDown network for sandbox \"61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d\" successfully" May 17 00:26:28.125416 containerd[1974]: time="2025-05-17T00:26:28.125345987Z" level=info msg="StopPodSandbox for \"61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d\" returns successfully" May 17 00:26:28.128119 containerd[1974]: time="2025-05-17T00:26:28.128086803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f57798587-b77km,Uid:a481fc84-c654-4b7a-8053-527437371f0f,Namespace:calico-apiserver,Attempt:1,}" May 17 00:26:28.131179 systemd[1]: run-netns-cni\x2d1325051e\x2d1cb8\x2d6ff5\x2d60ea\x2d58829d7948ea.mount: Deactivated successfully. May 17 00:26:28.373175 systemd-networkd[1893]: cali25e046d9e65: Link UP May 17 00:26:28.376320 (udev-worker)[5008]: Network interface NamePolicy= disabled on kernel command line. 
May 17 00:26:28.379100 systemd-networkd[1893]: cali25e046d9e65: Gained carrier May 17 00:26:28.401688 containerd[1974]: 2025-05-17 00:26:28.183 [INFO][4966] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:26:28.401688 containerd[1974]: 2025-05-17 00:26:28.215 [INFO][4966] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--228-k8s-calico--apiserver--5f57798587--cckmk-eth0 calico-apiserver-5f57798587- calico-apiserver fb1a5092-3a29-4f17-a060-ae80b6cdd361 975 0 2025-05-17 00:26:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f57798587 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-228 calico-apiserver-5f57798587-cckmk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali25e046d9e65 [] [] }} ContainerID="b95cb90ef8faa3d3b24491ba3d895e60b1afe42966eaa742a44cb26d2b3dadbf" Namespace="calico-apiserver" Pod="calico-apiserver-5f57798587-cckmk" WorkloadEndpoint="ip--172--31--23--228-k8s-calico--apiserver--5f57798587--cckmk-" May 17 00:26:28.401688 containerd[1974]: 2025-05-17 00:26:28.215 [INFO][4966] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b95cb90ef8faa3d3b24491ba3d895e60b1afe42966eaa742a44cb26d2b3dadbf" Namespace="calico-apiserver" Pod="calico-apiserver-5f57798587-cckmk" WorkloadEndpoint="ip--172--31--23--228-k8s-calico--apiserver--5f57798587--cckmk-eth0" May 17 00:26:28.401688 containerd[1974]: 2025-05-17 00:26:28.297 [INFO][4991] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b95cb90ef8faa3d3b24491ba3d895e60b1afe42966eaa742a44cb26d2b3dadbf" HandleID="k8s-pod-network.b95cb90ef8faa3d3b24491ba3d895e60b1afe42966eaa742a44cb26d2b3dadbf" Workload="ip--172--31--23--228-k8s-calico--apiserver--5f57798587--cckmk-eth0" May 17 00:26:28.401688 containerd[1974]: 2025-05-17 00:26:28.297 [INFO][4991] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b95cb90ef8faa3d3b24491ba3d895e60b1afe42966eaa742a44cb26d2b3dadbf" HandleID="k8s-pod-network.b95cb90ef8faa3d3b24491ba3d895e60b1afe42966eaa742a44cb26d2b3dadbf" Workload="ip--172--31--23--228-k8s-calico--apiserver--5f57798587--cckmk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fe30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-23-228", "pod":"calico-apiserver-5f57798587-cckmk", "timestamp":"2025-05-17 00:26:28.29714296 +0000 UTC"}, Hostname:"ip-172-31-23-228", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:26:28.401688 containerd[1974]: 2025-05-17 00:26:28.297 [INFO][4991] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:28.401688 containerd[1974]: 2025-05-17 00:26:28.299 [INFO][4991] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:26:28.401688 containerd[1974]: 2025-05-17 00:26:28.299 [INFO][4991] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-228' May 17 00:26:28.401688 containerd[1974]: 2025-05-17 00:26:28.311 [INFO][4991] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b95cb90ef8faa3d3b24491ba3d895e60b1afe42966eaa742a44cb26d2b3dadbf" host="ip-172-31-23-228" May 17 00:26:28.401688 containerd[1974]: 2025-05-17 00:26:28.324 [INFO][4991] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-228" May 17 00:26:28.401688 containerd[1974]: 2025-05-17 00:26:28.341 [INFO][4991] ipam/ipam.go 511: Trying affinity for 192.168.17.128/26 host="ip-172-31-23-228" May 17 00:26:28.401688 containerd[1974]: 2025-05-17 00:26:28.343 [INFO][4991] ipam/ipam.go 158: Attempting to load block cidr=192.168.17.128/26 host="ip-172-31-23-228" May 17 00:26:28.401688 containerd[1974]: 2025-05-17 00:26:28.346 [INFO][4991] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.17.128/26 host="ip-172-31-23-228" May 17 00:26:28.401688 containerd[1974]: 2025-05-17 00:26:28.346 [INFO][4991] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.17.128/26 handle="k8s-pod-network.b95cb90ef8faa3d3b24491ba3d895e60b1afe42966eaa742a44cb26d2b3dadbf" host="ip-172-31-23-228" May 17 00:26:28.401688 containerd[1974]: 2025-05-17 00:26:28.348 [INFO][4991] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b95cb90ef8faa3d3b24491ba3d895e60b1afe42966eaa742a44cb26d2b3dadbf May 17 00:26:28.401688 containerd[1974]: 2025-05-17 00:26:28.354 [INFO][4991] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.17.128/26 handle="k8s-pod-network.b95cb90ef8faa3d3b24491ba3d895e60b1afe42966eaa742a44cb26d2b3dadbf" host="ip-172-31-23-228" May 17 00:26:28.401688 containerd[1974]: 2025-05-17 00:26:28.363 [INFO][4991] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.17.130/26] block=192.168.17.128/26 handle="k8s-pod-network.b95cb90ef8faa3d3b24491ba3d895e60b1afe42966eaa742a44cb26d2b3dadbf" host="ip-172-31-23-228" May 17 00:26:28.401688 containerd[1974]: 2025-05-17 00:26:28.363 [INFO][4991] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.17.130/26] handle="k8s-pod-network.b95cb90ef8faa3d3b24491ba3d895e60b1afe42966eaa742a44cb26d2b3dadbf" host="ip-172-31-23-228" May 17 00:26:28.401688 containerd[1974]: 2025-05-17 00:26:28.363 [INFO][4991] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:26:28.401688 containerd[1974]: 2025-05-17 00:26:28.363 [INFO][4991] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.130/26] IPv6=[] ContainerID="b95cb90ef8faa3d3b24491ba3d895e60b1afe42966eaa742a44cb26d2b3dadbf" HandleID="k8s-pod-network.b95cb90ef8faa3d3b24491ba3d895e60b1afe42966eaa742a44cb26d2b3dadbf" Workload="ip--172--31--23--228-k8s-calico--apiserver--5f57798587--cckmk-eth0" May 17 00:26:28.403030 containerd[1974]: 2025-05-17 00:26:28.366 [INFO][4966] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b95cb90ef8faa3d3b24491ba3d895e60b1afe42966eaa742a44cb26d2b3dadbf" Namespace="calico-apiserver" Pod="calico-apiserver-5f57798587-cckmk" WorkloadEndpoint="ip--172--31--23--228-k8s-calico--apiserver--5f57798587--cckmk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--228-k8s-calico--apiserver--5f57798587--cckmk-eth0", GenerateName:"calico-apiserver-5f57798587-", Namespace:"calico-apiserver", SelfLink:"", UID:"fb1a5092-3a29-4f17-a060-ae80b6cdd361", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 26, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f57798587", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-228", ContainerID:"", Pod:"calico-apiserver-5f57798587-cckmk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali25e046d9e65", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:28.403030 containerd[1974]: 2025-05-17 00:26:28.366 [INFO][4966] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.17.130/32] ContainerID="b95cb90ef8faa3d3b24491ba3d895e60b1afe42966eaa742a44cb26d2b3dadbf" Namespace="calico-apiserver" Pod="calico-apiserver-5f57798587-cckmk" WorkloadEndpoint="ip--172--31--23--228-k8s-calico--apiserver--5f57798587--cckmk-eth0" May 17 00:26:28.403030 containerd[1974]: 2025-05-17 00:26:28.367 [INFO][4966] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali25e046d9e65 ContainerID="b95cb90ef8faa3d3b24491ba3d895e60b1afe42966eaa742a44cb26d2b3dadbf" Namespace="calico-apiserver" Pod="calico-apiserver-5f57798587-cckmk" WorkloadEndpoint="ip--172--31--23--228-k8s-calico--apiserver--5f57798587--cckmk-eth0" May 17 00:26:28.403030 containerd[1974]: 2025-05-17 00:26:28.378 [INFO][4966] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b95cb90ef8faa3d3b24491ba3d895e60b1afe42966eaa742a44cb26d2b3dadbf" Namespace="calico-apiserver" Pod="calico-apiserver-5f57798587-cckmk" WorkloadEndpoint="ip--172--31--23--228-k8s-calico--apiserver--5f57798587--cckmk-eth0" May 17 00:26:28.403030 containerd[1974]: 2025-05-17 00:26:28.380 [INFO][4966] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="b95cb90ef8faa3d3b24491ba3d895e60b1afe42966eaa742a44cb26d2b3dadbf" Namespace="calico-apiserver" Pod="calico-apiserver-5f57798587-cckmk" WorkloadEndpoint="ip--172--31--23--228-k8s-calico--apiserver--5f57798587--cckmk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--228-k8s-calico--apiserver--5f57798587--cckmk-eth0", GenerateName:"calico-apiserver-5f57798587-", Namespace:"calico-apiserver", SelfLink:"", UID:"fb1a5092-3a29-4f17-a060-ae80b6cdd361", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 26, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f57798587", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-228", ContainerID:"b95cb90ef8faa3d3b24491ba3d895e60b1afe42966eaa742a44cb26d2b3dadbf", Pod:"calico-apiserver-5f57798587-cckmk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali25e046d9e65", MAC:"5a:17:f5:33:04:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:28.403030 containerd[1974]: 2025-05-17 00:26:28.397 [INFO][4966] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b95cb90ef8faa3d3b24491ba3d895e60b1afe42966eaa742a44cb26d2b3dadbf" Namespace="calico-apiserver" Pod="calico-apiserver-5f57798587-cckmk" WorkloadEndpoint="ip--172--31--23--228-k8s-calico--apiserver--5f57798587--cckmk-eth0" May 17 00:26:28.439684 containerd[1974]: time="2025-05-17T00:26:28.429693431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:26:28.439684 containerd[1974]: time="2025-05-17T00:26:28.437727551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:26:28.439684 containerd[1974]: time="2025-05-17T00:26:28.437759053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:26:28.439684 containerd[1974]: time="2025-05-17T00:26:28.438005916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:26:28.472691 systemd[1]: Started cri-containerd-b95cb90ef8faa3d3b24491ba3d895e60b1afe42966eaa742a44cb26d2b3dadbf.scope - libcontainer container b95cb90ef8faa3d3b24491ba3d895e60b1afe42966eaa742a44cb26d2b3dadbf. May 17 00:26:28.488217 (udev-worker)[5010]: Network interface NamePolicy= disabled on kernel command line. 
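The udev worker notes that interface NamePolicy= is disabled, and indeed the host-side veth names (calid33deb1fe77, cali25e046d9e65) are chosen by the CNI plugin itself in the "Setting the host side veth name" steps, keeping them stable per endpoint and within the 15-character interface-name limit. A purely illustrative sketch of a "cali" + 11-character scheme (hypothetical; the actual derivation inside Calico may differ):

```go
package main

import (
	"crypto/sha1"
	"fmt"
)

// vethName sketches a deterministic "cali" + 11-hex-char name from an
// endpoint key. Hypothetical: this illustrates the shape of the names
// only; Calico's real derivation may differ.
func vethName(endpointKey string) string {
	sum := sha1.Sum([]byte(endpointKey))
	return "cali" + fmt.Sprintf("%x", sum[:])[:11]
}

func main() {
	fmt.Println(vethName("calico-apiserver/calico-apiserver-5f57798587-cckmk"))
	// 15 characters total, within Linux's IFNAMSIZ limit.
}
```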
May 17 00:26:28.491528 systemd-networkd[1893]: calic71a016879e: Link UP May 17 00:26:28.493357 systemd-networkd[1893]: calic71a016879e: Gained carrier May 17 00:26:28.534670 containerd[1974]: 2025-05-17 00:26:28.220 [INFO][4978] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:26:28.534670 containerd[1974]: 2025-05-17 00:26:28.247 [INFO][4978] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--228-k8s-calico--apiserver--5f57798587--b77km-eth0 calico-apiserver-5f57798587- calico-apiserver a481fc84-c654-4b7a-8053-527437371f0f 976 0 2025-05-17 00:26:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f57798587 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-228 calico-apiserver-5f57798587-b77km eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic71a016879e [] [] }} ContainerID="5edd0cd1908ace2ba7c9b8f2c4263cd10e36210374829de75ee4faf607f40b07" Namespace="calico-apiserver" Pod="calico-apiserver-5f57798587-b77km" WorkloadEndpoint="ip--172--31--23--228-k8s-calico--apiserver--5f57798587--b77km-" May 17 00:26:28.534670 containerd[1974]: 2025-05-17 00:26:28.247 [INFO][4978] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5edd0cd1908ace2ba7c9b8f2c4263cd10e36210374829de75ee4faf607f40b07" Namespace="calico-apiserver" Pod="calico-apiserver-5f57798587-b77km" WorkloadEndpoint="ip--172--31--23--228-k8s-calico--apiserver--5f57798587--b77km-eth0" May 17 00:26:28.534670 containerd[1974]: 2025-05-17 00:26:28.338 [INFO][4999] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5edd0cd1908ace2ba7c9b8f2c4263cd10e36210374829de75ee4faf607f40b07" HandleID="k8s-pod-network.5edd0cd1908ace2ba7c9b8f2c4263cd10e36210374829de75ee4faf607f40b07" Workload="ip--172--31--23--228-k8s-calico--apiserver--5f57798587--b77km-eth0" May 17 00:26:28.534670 containerd[1974]: 2025-05-17 00:26:28.340 [INFO][4999] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5edd0cd1908ace2ba7c9b8f2c4263cd10e36210374829de75ee4faf607f40b07" HandleID="k8s-pod-network.5edd0cd1908ace2ba7c9b8f2c4263cd10e36210374829de75ee4faf607f40b07" Workload="ip--172--31--23--228-k8s-calico--apiserver--5f57798587--b77km-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039da40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-23-228", "pod":"calico-apiserver-5f57798587-b77km", "timestamp":"2025-05-17 00:26:28.338912794 +0000 UTC"}, Hostname:"ip-172-31-23-228", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:26:28.534670 containerd[1974]: 2025-05-17 00:26:28.340 [INFO][4999] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:28.534670 containerd[1974]: 2025-05-17 00:26:28.363 [INFO][4999] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:26:28.534670 containerd[1974]: 2025-05-17 00:26:28.364 [INFO][4999] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-228' May 17 00:26:28.534670 containerd[1974]: 2025-05-17 00:26:28.413 [INFO][4999] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5edd0cd1908ace2ba7c9b8f2c4263cd10e36210374829de75ee4faf607f40b07" host="ip-172-31-23-228" May 17 00:26:28.534670 containerd[1974]: 2025-05-17 00:26:28.424 [INFO][4999] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-228" May 17 00:26:28.534670 containerd[1974]: 2025-05-17 00:26:28.435 [INFO][4999] ipam/ipam.go 511: Trying affinity for 192.168.17.128/26 host="ip-172-31-23-228" May 17 00:26:28.534670 containerd[1974]: 2025-05-17 00:26:28.439 [INFO][4999] ipam/ipam.go 158: Attempting to load block cidr=192.168.17.128/26 host="ip-172-31-23-228" May 17 00:26:28.534670 containerd[1974]: 2025-05-17 00:26:28.442 [INFO][4999] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.17.128/26 host="ip-172-31-23-228" May 17 00:26:28.534670 containerd[1974]: 2025-05-17 00:26:28.442 [INFO][4999] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.17.128/26 handle="k8s-pod-network.5edd0cd1908ace2ba7c9b8f2c4263cd10e36210374829de75ee4faf607f40b07" host="ip-172-31-23-228" May 17 00:26:28.534670 containerd[1974]: 2025-05-17 00:26:28.443 [INFO][4999] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5edd0cd1908ace2ba7c9b8f2c4263cd10e36210374829de75ee4faf607f40b07 May 17 00:26:28.534670 containerd[1974]: 2025-05-17 00:26:28.460 [INFO][4999] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.17.128/26 handle="k8s-pod-network.5edd0cd1908ace2ba7c9b8f2c4263cd10e36210374829de75ee4faf607f40b07" host="ip-172-31-23-228" May 17 00:26:28.534670 containerd[1974]: 2025-05-17 00:26:28.476 [INFO][4999] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.17.131/26] block=192.168.17.128/26 handle="k8s-pod-network.5edd0cd1908ace2ba7c9b8f2c4263cd10e36210374829de75ee4faf607f40b07" host="ip-172-31-23-228" May 17 00:26:28.534670 containerd[1974]: 2025-05-17 00:26:28.476 [INFO][4999] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.17.131/26] handle="k8s-pod-network.5edd0cd1908ace2ba7c9b8f2c4263cd10e36210374829de75ee4faf607f40b07" host="ip-172-31-23-228" May 17 00:26:28.534670 containerd[1974]: 2025-05-17 00:26:28.476 [INFO][4999] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
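The [4999] lines above form one complete IPAM transaction for the b77km pod: look up the host's block affinity, confirm and load 192.168.17.128/26, claim one address under a fresh handle, and write the block back. Below is a sketch of the request that starts it, with the AutoAssignArgs values copied from the dump above; the client construction and the AutoAssign return shape differ across libcalico-go releases, so treat the call shape as an assumption rather than the plugin's literal code.

```go
// A sketch of the AutoAssign request shown in the log above. Values are
// taken from the logged AutoAssignArgs; datastore configuration and the
// exact return type are assumptions that vary by Calico release.
package main

import (
	"context"
	"fmt"

	"github.com/projectcalico/calico/libcalico-go/lib/apiconfig"
	"github.com/projectcalico/calico/libcalico-go/lib/clientv3"
	"github.com/projectcalico/calico/libcalico-go/lib/ipam"
)

func main() {
	c, err := clientv3.New(apiconfig.CalicoAPIConfig{}) // datastore config omitted
	if err != nil {
		panic(err)
	}

	handle := "k8s-pod-network.5edd0cd1908ace2ba7c9b8f2c4263cd10e36210374829de75ee4faf607f40b07"
	v4, _, err := c.IPAM().AutoAssign(context.Background(), ipam.AutoAssignArgs{
		Num4:     1,
		Num6:     0,
		HandleID: &handle,
		Attrs: map[string]string{
			"namespace": "calico-apiserver",
			"node":      "ip-172-31-23-228",
			"pod":       "calico-apiserver-5f57798587-b77km",
		},
		Hostname:    "ip-172-31-23-228",
		IntendedUse: "Workload",
	})
	if err != nil {
		panic(err)
	}
	// With the affinity above confirmed, this yields 192.168.17.131
	// from block 192.168.17.128/26.
	fmt.Println(v4)
}
```

The pod itself only ever sees a /32; the /26 block affinity exists so each node can advertise one aggregate route for all of its local workloads.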
May 17 00:26:28.534670 containerd[1974]: 2025-05-17 00:26:28.476 [INFO][4999] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.131/26] IPv6=[] ContainerID="5edd0cd1908ace2ba7c9b8f2c4263cd10e36210374829de75ee4faf607f40b07" HandleID="k8s-pod-network.5edd0cd1908ace2ba7c9b8f2c4263cd10e36210374829de75ee4faf607f40b07" Workload="ip--172--31--23--228-k8s-calico--apiserver--5f57798587--b77km-eth0" May 17 00:26:28.536233 containerd[1974]: 2025-05-17 00:26:28.483 [INFO][4978] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5edd0cd1908ace2ba7c9b8f2c4263cd10e36210374829de75ee4faf607f40b07" Namespace="calico-apiserver" Pod="calico-apiserver-5f57798587-b77km" WorkloadEndpoint="ip--172--31--23--228-k8s-calico--apiserver--5f57798587--b77km-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--228-k8s-calico--apiserver--5f57798587--b77km-eth0", GenerateName:"calico-apiserver-5f57798587-", Namespace:"calico-apiserver", SelfLink:"", UID:"a481fc84-c654-4b7a-8053-527437371f0f", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 26, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f57798587", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-228", ContainerID:"", Pod:"calico-apiserver-5f57798587-b77km", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic71a016879e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:28.536233 containerd[1974]: 2025-05-17 00:26:28.483 [INFO][4978] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.17.131/32] ContainerID="5edd0cd1908ace2ba7c9b8f2c4263cd10e36210374829de75ee4faf607f40b07" Namespace="calico-apiserver" Pod="calico-apiserver-5f57798587-b77km" WorkloadEndpoint="ip--172--31--23--228-k8s-calico--apiserver--5f57798587--b77km-eth0" May 17 00:26:28.536233 containerd[1974]: 2025-05-17 00:26:28.483 [INFO][4978] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic71a016879e ContainerID="5edd0cd1908ace2ba7c9b8f2c4263cd10e36210374829de75ee4faf607f40b07" Namespace="calico-apiserver" Pod="calico-apiserver-5f57798587-b77km" WorkloadEndpoint="ip--172--31--23--228-k8s-calico--apiserver--5f57798587--b77km-eth0" May 17 00:26:28.536233 containerd[1974]: 2025-05-17 00:26:28.499 [INFO][4978] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5edd0cd1908ace2ba7c9b8f2c4263cd10e36210374829de75ee4faf607f40b07" Namespace="calico-apiserver" Pod="calico-apiserver-5f57798587-b77km" WorkloadEndpoint="ip--172--31--23--228-k8s-calico--apiserver--5f57798587--b77km-eth0" May 17 00:26:28.536233 containerd[1974]: 2025-05-17 00:26:28.509 [INFO][4978] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="5edd0cd1908ace2ba7c9b8f2c4263cd10e36210374829de75ee4faf607f40b07" Namespace="calico-apiserver" Pod="calico-apiserver-5f57798587-b77km" WorkloadEndpoint="ip--172--31--23--228-k8s-calico--apiserver--5f57798587--b77km-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--228-k8s-calico--apiserver--5f57798587--b77km-eth0", GenerateName:"calico-apiserver-5f57798587-", Namespace:"calico-apiserver", SelfLink:"", UID:"a481fc84-c654-4b7a-8053-527437371f0f", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 26, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f57798587", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-228", ContainerID:"5edd0cd1908ace2ba7c9b8f2c4263cd10e36210374829de75ee4faf607f40b07", Pod:"calico-apiserver-5f57798587-b77km", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic71a016879e", MAC:"56:60:b6:c1:aa:f3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:28.536233 containerd[1974]: 2025-05-17 00:26:28.529 [INFO][4978] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5edd0cd1908ace2ba7c9b8f2c4263cd10e36210374829de75ee4faf607f40b07" Namespace="calico-apiserver" Pod="calico-apiserver-5f57798587-b77km" WorkloadEndpoint="ip--172--31--23--228-k8s-calico--apiserver--5f57798587--b77km-eth0" May 17 00:26:28.613615 containerd[1974]: time="2025-05-17T00:26:28.612329182Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:26:28.613615 containerd[1974]: time="2025-05-17T00:26:28.612389823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:26:28.613615 containerd[1974]: time="2025-05-17T00:26:28.612407381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:26:28.613615 containerd[1974]: time="2025-05-17T00:26:28.612539486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:26:28.624729 containerd[1974]: time="2025-05-17T00:26:28.624225860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f57798587-cckmk,Uid:fb1a5092-3a29-4f17-a060-ae80b6cdd361,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b95cb90ef8faa3d3b24491ba3d895e60b1afe42966eaa742a44cb26d2b3dadbf\"" May 17 00:26:28.656456 sshd[4848]: pam_unix(sshd:session): session closed for user core May 17 00:26:28.668515 containerd[1974]: time="2025-05-17T00:26:28.668010304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:26:28.671375 systemd[1]: sshd@9-172.31.23.228:22-147.75.109.163:45752.service: Deactivated successfully. May 17 00:26:28.676040 systemd[1]: session-10.scope: Deactivated successfully. May 17 00:26:28.680210 systemd-logind[1955]: Session 10 logged out. Waiting for processes to exit. May 17 00:26:28.686635 systemd[1]: Started cri-containerd-5edd0cd1908ace2ba7c9b8f2c4263cd10e36210374829de75ee4faf607f40b07.scope - libcontainer container 5edd0cd1908ace2ba7c9b8f2c4263cd10e36210374829de75ee4faf607f40b07. May 17 00:26:28.689679 systemd-logind[1955]: Removed session 10. May 17 00:26:28.786775 containerd[1974]: time="2025-05-17T00:26:28.786622060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f57798587-b77km,Uid:a481fc84-c654-4b7a-8053-527437371f0f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5edd0cd1908ace2ba7c9b8f2c4263cd10e36210374829de75ee4faf607f40b07\"" May 17 00:26:28.795527 containerd[1974]: time="2025-05-17T00:26:28.795223536Z" level=info msg="StopPodSandbox for \"a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9\"" May 17 00:26:28.901594 containerd[1974]: 2025-05-17 00:26:28.858 [INFO][5121] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9" May 17 00:26:28.901594 containerd[1974]: 2025-05-17 00:26:28.858 [INFO][5121] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9" iface="eth0" netns="/var/run/netns/cni-5495f9b2-9abd-d521-865b-e8899c963bec" May 17 00:26:28.901594 containerd[1974]: 2025-05-17 00:26:28.858 [INFO][5121] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9" iface="eth0" netns="/var/run/netns/cni-5495f9b2-9abd-d521-865b-e8899c963bec" May 17 00:26:28.901594 containerd[1974]: 2025-05-17 00:26:28.859 [INFO][5121] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9" iface="eth0" netns="/var/run/netns/cni-5495f9b2-9abd-d521-865b-e8899c963bec" May 17 00:26:28.901594 containerd[1974]: 2025-05-17 00:26:28.859 [INFO][5121] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9" May 17 00:26:28.901594 containerd[1974]: 2025-05-17 00:26:28.859 [INFO][5121] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9" May 17 00:26:28.901594 containerd[1974]: 2025-05-17 00:26:28.886 [INFO][5128] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9" HandleID="k8s-pod-network.a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9" Workload="ip--172--31--23--228-k8s-coredns--7c65d6cfc9--5sswv-eth0" May 17 00:26:28.901594 containerd[1974]: 2025-05-17 00:26:28.886 [INFO][5128] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:28.901594 containerd[1974]: 2025-05-17 00:26:28.886 [INFO][5128] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:26:28.901594 containerd[1974]: 2025-05-17 00:26:28.894 [WARNING][5128] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9" HandleID="k8s-pod-network.a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9" Workload="ip--172--31--23--228-k8s-coredns--7c65d6cfc9--5sswv-eth0" May 17 00:26:28.901594 containerd[1974]: 2025-05-17 00:26:28.894 [INFO][5128] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9" HandleID="k8s-pod-network.a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9" Workload="ip--172--31--23--228-k8s-coredns--7c65d6cfc9--5sswv-eth0" May 17 00:26:28.901594 containerd[1974]: 2025-05-17 00:26:28.897 [INFO][5128] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:28.901594 containerd[1974]: 2025-05-17 00:26:28.899 [INFO][5121] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9" May 17 00:26:28.906296 containerd[1974]: time="2025-05-17T00:26:28.901655618Z" level=info msg="TearDown network for sandbox \"a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9\" successfully" May 17 00:26:28.906296 containerd[1974]: time="2025-05-17T00:26:28.901680790Z" level=info msg="StopPodSandbox for \"a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9\" returns successfully" May 17 00:26:28.906296 containerd[1974]: time="2025-05-17T00:26:28.902380362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5sswv,Uid:7dae1d2e-61e6-48f7-aad5-2ce4e3746c6c,Namespace:kube-system,Attempt:1,}" May 17 00:26:28.905994 systemd[1]: run-netns-cni\x2d5495f9b2\x2d9abd\x2dd521\x2d865b\x2de8899c963bec.mount: Deactivated successfully. 
May 17 00:26:28.935192 kernel: bpftool[5148]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 17 00:26:29.088781 systemd-networkd[1893]: cali9698d724ce9: Link UP May 17 00:26:29.089595 systemd-networkd[1893]: cali9698d724ce9: Gained carrier May 17 00:26:29.108121 containerd[1974]: 2025-05-17 00:26:28.981 [INFO][5153] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--228-k8s-coredns--7c65d6cfc9--5sswv-eth0 coredns-7c65d6cfc9- kube-system 7dae1d2e-61e6-48f7-aad5-2ce4e3746c6c 1002 0 2025-05-17 00:25:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-228 coredns-7c65d6cfc9-5sswv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9698d724ce9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8bb4050d49c88f90713fb91fb557587cb86da9e061f976b4903c4923bd25e4bf" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5sswv" WorkloadEndpoint="ip--172--31--23--228-k8s-coredns--7c65d6cfc9--5sswv-" May 17 00:26:29.108121 containerd[1974]: 2025-05-17 00:26:28.981 [INFO][5153] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8bb4050d49c88f90713fb91fb557587cb86da9e061f976b4903c4923bd25e4bf" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5sswv" WorkloadEndpoint="ip--172--31--23--228-k8s-coredns--7c65d6cfc9--5sswv-eth0" May 17 00:26:29.108121 containerd[1974]: 2025-05-17 00:26:29.021 [INFO][5162] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8bb4050d49c88f90713fb91fb557587cb86da9e061f976b4903c4923bd25e4bf" HandleID="k8s-pod-network.8bb4050d49c88f90713fb91fb557587cb86da9e061f976b4903c4923bd25e4bf" Workload="ip--172--31--23--228-k8s-coredns--7c65d6cfc9--5sswv-eth0" May 17 00:26:29.108121 containerd[1974]: 2025-05-17 00:26:29.022 [INFO][5162] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8bb4050d49c88f90713fb91fb557587cb86da9e061f976b4903c4923bd25e4bf" HandleID="k8s-pod-network.8bb4050d49c88f90713fb91fb557587cb86da9e061f976b4903c4923bd25e4bf" Workload="ip--172--31--23--228-k8s-coredns--7c65d6cfc9--5sswv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9060), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-228", "pod":"coredns-7c65d6cfc9-5sswv", "timestamp":"2025-05-17 00:26:29.020972286 +0000 UTC"}, Hostname:"ip-172-31-23-228", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:26:29.108121 containerd[1974]: 2025-05-17 00:26:29.022 [INFO][5162] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:29.108121 containerd[1974]: 2025-05-17 00:26:29.022 [INFO][5162] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:26:29.108121 containerd[1974]: 2025-05-17 00:26:29.022 [INFO][5162] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-228' May 17 00:26:29.108121 containerd[1974]: 2025-05-17 00:26:29.029 [INFO][5162] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8bb4050d49c88f90713fb91fb557587cb86da9e061f976b4903c4923bd25e4bf" host="ip-172-31-23-228" May 17 00:26:29.108121 containerd[1974]: 2025-05-17 00:26:29.034 [INFO][5162] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-228" May 17 00:26:29.108121 containerd[1974]: 2025-05-17 00:26:29.040 [INFO][5162] ipam/ipam.go 511: Trying affinity for 192.168.17.128/26 host="ip-172-31-23-228" May 17 00:26:29.108121 containerd[1974]: 2025-05-17 00:26:29.042 [INFO][5162] ipam/ipam.go 158: Attempting to load block cidr=192.168.17.128/26 host="ip-172-31-23-228" May 17 00:26:29.108121 containerd[1974]: 2025-05-17 00:26:29.044 [INFO][5162] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.17.128/26 host="ip-172-31-23-228" May 17 00:26:29.108121 containerd[1974]: 2025-05-17 00:26:29.044 [INFO][5162] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.17.128/26 handle="k8s-pod-network.8bb4050d49c88f90713fb91fb557587cb86da9e061f976b4903c4923bd25e4bf" host="ip-172-31-23-228" May 17 00:26:29.108121 containerd[1974]: 2025-05-17 00:26:29.046 [INFO][5162] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8bb4050d49c88f90713fb91fb557587cb86da9e061f976b4903c4923bd25e4bf May 17 00:26:29.108121 containerd[1974]: 2025-05-17 00:26:29.057 [INFO][5162] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.17.128/26 handle="k8s-pod-network.8bb4050d49c88f90713fb91fb557587cb86da9e061f976b4903c4923bd25e4bf" host="ip-172-31-23-228" May 17 00:26:29.108121 containerd[1974]: 2025-05-17 00:26:29.067 [INFO][5162] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.17.132/26] block=192.168.17.128/26 handle="k8s-pod-network.8bb4050d49c88f90713fb91fb557587cb86da9e061f976b4903c4923bd25e4bf" host="ip-172-31-23-228" May 17 00:26:29.108121 containerd[1974]: 2025-05-17 00:26:29.067 [INFO][5162] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.17.132/26] handle="k8s-pod-network.8bb4050d49c88f90713fb91fb557587cb86da9e061f976b4903c4923bd25e4bf" host="ip-172-31-23-228" May 17 00:26:29.108121 containerd[1974]: 2025-05-17 00:26:29.067 [INFO][5162] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
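Note how [5162]'s lock lines queue behind [4999]'s earlier in this log: the host-wide IPAM lock serialises concurrent CNI ADD/DEL calls so only one transaction mutates the node's block at a time. A toy illustration of that per-host mutual exclusion with flock(2) follows; the lock-file path below is invented for the sketch and is not Calico's actual location.

```go
// Toy sketch of a host-wide IPAM lock using flock(2). Path is
// hypothetical; Calico's real implementation may differ.
package main

import (
	"log"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	f, err := os.OpenFile("/var/run/example/ipam.lock", os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	log.Println("About to acquire host-wide IPAM lock.")
	// Blocks until any concurrent holder (e.g. another CNI ADD) releases.
	if err := unix.Flock(int(f.Fd()), unix.LOCK_EX); err != nil {
		log.Fatal(err)
	}
	log.Println("Acquired host-wide IPAM lock.")

	// ... read the block, claim an address, write the block back ...

	if err := unix.Flock(int(f.Fd()), unix.LOCK_UN); err != nil {
		log.Fatal(err)
	}
	log.Println("Released host-wide IPAM lock.")
}
```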
May 17 00:26:29.108121 containerd[1974]: 2025-05-17 00:26:29.067 [INFO][5162] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.132/26] IPv6=[] ContainerID="8bb4050d49c88f90713fb91fb557587cb86da9e061f976b4903c4923bd25e4bf" HandleID="k8s-pod-network.8bb4050d49c88f90713fb91fb557587cb86da9e061f976b4903c4923bd25e4bf" Workload="ip--172--31--23--228-k8s-coredns--7c65d6cfc9--5sswv-eth0" May 17 00:26:29.110926 containerd[1974]: 2025-05-17 00:26:29.072 [INFO][5153] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8bb4050d49c88f90713fb91fb557587cb86da9e061f976b4903c4923bd25e4bf" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5sswv" WorkloadEndpoint="ip--172--31--23--228-k8s-coredns--7c65d6cfc9--5sswv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--228-k8s-coredns--7c65d6cfc9--5sswv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7dae1d2e-61e6-48f7-aad5-2ce4e3746c6c", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 25, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-228", ContainerID:"", Pod:"coredns-7c65d6cfc9-5sswv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9698d724ce9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:29.110926 containerd[1974]: 2025-05-17 00:26:29.072 [INFO][5153] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.17.132/32] ContainerID="8bb4050d49c88f90713fb91fb557587cb86da9e061f976b4903c4923bd25e4bf" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5sswv" WorkloadEndpoint="ip--172--31--23--228-k8s-coredns--7c65d6cfc9--5sswv-eth0" May 17 00:26:29.110926 containerd[1974]: 2025-05-17 00:26:29.072 [INFO][5153] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9698d724ce9 ContainerID="8bb4050d49c88f90713fb91fb557587cb86da9e061f976b4903c4923bd25e4bf" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5sswv" WorkloadEndpoint="ip--172--31--23--228-k8s-coredns--7c65d6cfc9--5sswv-eth0" May 17 00:26:29.110926 containerd[1974]: 2025-05-17 00:26:29.088 [INFO][5153] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8bb4050d49c88f90713fb91fb557587cb86da9e061f976b4903c4923bd25e4bf" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5sswv" 
WorkloadEndpoint="ip--172--31--23--228-k8s-coredns--7c65d6cfc9--5sswv-eth0" May 17 00:26:29.110926 containerd[1974]: 2025-05-17 00:26:29.088 [INFO][5153] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8bb4050d49c88f90713fb91fb557587cb86da9e061f976b4903c4923bd25e4bf" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5sswv" WorkloadEndpoint="ip--172--31--23--228-k8s-coredns--7c65d6cfc9--5sswv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--228-k8s-coredns--7c65d6cfc9--5sswv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7dae1d2e-61e6-48f7-aad5-2ce4e3746c6c", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 25, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-228", ContainerID:"8bb4050d49c88f90713fb91fb557587cb86da9e061f976b4903c4923bd25e4bf", Pod:"coredns-7c65d6cfc9-5sswv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9698d724ce9", MAC:"d6:da:82:5d:44:1f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:29.110926 containerd[1974]: 2025-05-17 00:26:29.104 [INFO][5153] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8bb4050d49c88f90713fb91fb557587cb86da9e061f976b4903c4923bd25e4bf" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5sswv" WorkloadEndpoint="ip--172--31--23--228-k8s-coredns--7c65d6cfc9--5sswv-eth0" May 17 00:26:29.147911 containerd[1974]: time="2025-05-17T00:26:29.147732225Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:26:29.147911 containerd[1974]: time="2025-05-17T00:26:29.147815077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:26:29.147911 containerd[1974]: time="2025-05-17T00:26:29.147826571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:26:29.148393 containerd[1974]: time="2025-05-17T00:26:29.148053948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:26:29.184363 systemd[1]: Started cri-containerd-8bb4050d49c88f90713fb91fb557587cb86da9e061f976b4903c4923bd25e4bf.scope - libcontainer container 8bb4050d49c88f90713fb91fb557587cb86da9e061f976b4903c4923bd25e4bf. May 17 00:26:29.237453 containerd[1974]: time="2025-05-17T00:26:29.237392690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5sswv,Uid:7dae1d2e-61e6-48f7-aad5-2ce4e3746c6c,Namespace:kube-system,Attempt:1,} returns sandbox id \"8bb4050d49c88f90713fb91fb557587cb86da9e061f976b4903c4923bd25e4bf\"" May 17 00:26:29.264900 containerd[1974]: time="2025-05-17T00:26:29.264756879Z" level=info msg="CreateContainer within sandbox \"8bb4050d49c88f90713fb91fb557587cb86da9e061f976b4903c4923bd25e4bf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:26:29.303009 containerd[1974]: time="2025-05-17T00:26:29.302974621Z" level=info msg="CreateContainer within sandbox \"8bb4050d49c88f90713fb91fb557587cb86da9e061f976b4903c4923bd25e4bf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d1104c866002810482168077bae00cb62a269d9c9d49c0cc93bbbc5ba00874da\"" May 17 00:26:29.308492 containerd[1974]: time="2025-05-17T00:26:29.307933855Z" level=info msg="StartContainer for \"d1104c866002810482168077bae00cb62a269d9c9d49c0cc93bbbc5ba00874da\"" May 17 00:26:29.343674 systemd[1]: Started cri-containerd-d1104c866002810482168077bae00cb62a269d9c9d49c0cc93bbbc5ba00874da.scope - libcontainer container d1104c866002810482168077bae00cb62a269d9c9d49c0cc93bbbc5ba00874da. May 17 00:26:29.397236 containerd[1974]: time="2025-05-17T00:26:29.397194983Z" level=info msg="StartContainer for \"d1104c866002810482168077bae00cb62a269d9c9d49c0cc93bbbc5ba00874da\" returns successfully" May 17 00:26:29.508532 systemd-networkd[1893]: vxlan.calico: Link UP May 17 00:26:29.508544 systemd-networkd[1893]: vxlan.calico: Gained carrier May 17 00:26:29.614886 systemd-networkd[1893]: cali25e046d9e65: Gained IPv6LL May 17 00:26:30.234282 kubelet[3186]: I0517 00:26:30.234208 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-5sswv" podStartSLOduration=41.234164982 podStartE2EDuration="41.234164982s" podCreationTimestamp="2025-05-17 00:25:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:26:30.219913414 +0000 UTC m=+46.534236466" watchObservedRunningTime="2025-05-17 00:26:30.234164982 +0000 UTC m=+46.548488032" May 17 00:26:30.512704 systemd-networkd[1893]: calic71a016879e: Gained IPv6LL May 17 00:26:30.574632 systemd-networkd[1893]: vxlan.calico: Gained IPv6LL May 17 00:26:30.795688 containerd[1974]: time="2025-05-17T00:26:30.795076914Z" level=info msg="StopPodSandbox for \"c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb\"" May 17 00:26:30.797691 containerd[1974]: time="2025-05-17T00:26:30.795823941Z" level=info msg="StopPodSandbox for \"11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215\"" May 17 00:26:30.797691 containerd[1974]: time="2025-05-17T00:26:30.796609662Z" level=info msg="StopPodSandbox for \"cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047\"" May 17 00:26:31.087264 systemd-networkd[1893]: cali9698d724ce9: Gained IPv6LL May 17 00:26:31.141827 containerd[1974]: 2025-05-17 00:26:31.012 [INFO][5402] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215" May 17 00:26:31.141827 containerd[1974]: 2025-05-17 00:26:31.014 [INFO][5402] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215" iface="eth0" netns="/var/run/netns/cni-ac4fc3a8-8d02-0365-af40-b30d3ea4a42a" May 17 00:26:31.141827 containerd[1974]: 2025-05-17 00:26:31.014 [INFO][5402] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215" iface="eth0" netns="/var/run/netns/cni-ac4fc3a8-8d02-0365-af40-b30d3ea4a42a" May 17 00:26:31.141827 containerd[1974]: 2025-05-17 00:26:31.015 [INFO][5402] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215" iface="eth0" netns="/var/run/netns/cni-ac4fc3a8-8d02-0365-af40-b30d3ea4a42a" May 17 00:26:31.141827 containerd[1974]: 2025-05-17 00:26:31.015 [INFO][5402] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215" May 17 00:26:31.141827 containerd[1974]: 2025-05-17 00:26:31.015 [INFO][5402] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215" May 17 00:26:31.141827 containerd[1974]: 2025-05-17 00:26:31.091 [INFO][5427] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215" HandleID="k8s-pod-network.11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215" Workload="ip--172--31--23--228-k8s-calico--kube--controllers--6b48558975--rlpts-eth0" May 17 00:26:31.141827 containerd[1974]: 2025-05-17 00:26:31.092 [INFO][5427] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:31.141827 containerd[1974]: 2025-05-17 00:26:31.092 [INFO][5427] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:26:31.141827 containerd[1974]: 2025-05-17 00:26:31.110 [WARNING][5427] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215" HandleID="k8s-pod-network.11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215" Workload="ip--172--31--23--228-k8s-calico--kube--controllers--6b48558975--rlpts-eth0" May 17 00:26:31.141827 containerd[1974]: 2025-05-17 00:26:31.110 [INFO][5427] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215" HandleID="k8s-pod-network.11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215" Workload="ip--172--31--23--228-k8s-calico--kube--controllers--6b48558975--rlpts-eth0" May 17 00:26:31.141827 containerd[1974]: 2025-05-17 00:26:31.113 [INFO][5427] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:31.141827 containerd[1974]: 2025-05-17 00:26:31.130 [INFO][5402] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215" May 17 00:26:31.145985 containerd[1974]: time="2025-05-17T00:26:31.144359688Z" level=info msg="TearDown network for sandbox \"11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215\" successfully" May 17 00:26:31.145985 containerd[1974]: time="2025-05-17T00:26:31.144399294Z" level=info msg="StopPodSandbox for \"11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215\" returns successfully" May 17 00:26:31.151551 containerd[1974]: time="2025-05-17T00:26:31.150775833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b48558975-rlpts,Uid:8a71913d-89eb-4c9e-9dfe-b7eb7c1fd2b5,Namespace:calico-system,Attempt:1,}" May 17 00:26:31.151261 systemd[1]: run-netns-cni\x2dac4fc3a8\x2d8d02\x2d0365\x2daf40\x2db30d3ea4a42a.mount: Deactivated successfully. May 17 00:26:31.177443 containerd[1974]: 2025-05-17 00:26:30.972 [INFO][5400] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb" May 17 00:26:31.177443 containerd[1974]: 2025-05-17 00:26:30.972 [INFO][5400] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb" iface="eth0" netns="/var/run/netns/cni-cbcf1db3-1675-0762-ab86-c0530fffff9b" May 17 00:26:31.177443 containerd[1974]: 2025-05-17 00:26:30.975 [INFO][5400] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb" iface="eth0" netns="/var/run/netns/cni-cbcf1db3-1675-0762-ab86-c0530fffff9b" May 17 00:26:31.177443 containerd[1974]: 2025-05-17 00:26:30.978 [INFO][5400] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb" iface="eth0" netns="/var/run/netns/cni-cbcf1db3-1675-0762-ab86-c0530fffff9b" May 17 00:26:31.177443 containerd[1974]: 2025-05-17 00:26:30.978 [INFO][5400] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb" May 17 00:26:31.177443 containerd[1974]: 2025-05-17 00:26:30.978 [INFO][5400] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb" May 17 00:26:31.177443 containerd[1974]: 2025-05-17 00:26:31.108 [INFO][5420] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb" HandleID="k8s-pod-network.c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb" Workload="ip--172--31--23--228-k8s-goldmane--8f77d7b6c--ffn6m-eth0" May 17 00:26:31.177443 containerd[1974]: 2025-05-17 00:26:31.109 [INFO][5420] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:31.177443 containerd[1974]: 2025-05-17 00:26:31.113 [INFO][5420] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:26:31.177443 containerd[1974]: 2025-05-17 00:26:31.149 [WARNING][5420] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb" HandleID="k8s-pod-network.c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb" Workload="ip--172--31--23--228-k8s-goldmane--8f77d7b6c--ffn6m-eth0" May 17 00:26:31.177443 containerd[1974]: 2025-05-17 00:26:31.150 [INFO][5420] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb" HandleID="k8s-pod-network.c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb" Workload="ip--172--31--23--228-k8s-goldmane--8f77d7b6c--ffn6m-eth0" May 17 00:26:31.177443 containerd[1974]: 2025-05-17 00:26:31.158 [INFO][5420] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:31.177443 containerd[1974]: 2025-05-17 00:26:31.170 [INFO][5400] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb" May 17 00:26:31.181137 containerd[1974]: time="2025-05-17T00:26:31.179042566Z" level=info msg="TearDown network for sandbox \"c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb\" successfully" May 17 00:26:31.181137 containerd[1974]: time="2025-05-17T00:26:31.179075473Z" level=info msg="StopPodSandbox for \"c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb\" returns successfully" May 17 00:26:31.185997 systemd[1]: run-netns-cni\x2dcbcf1db3\x2d1675\x2d0762\x2dab86\x2dc0530fffff9b.mount: Deactivated successfully. May 17 00:26:31.189245 containerd[1974]: time="2025-05-17T00:26:31.189033902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-ffn6m,Uid:6c6feb10-5e10-4718-ae2e-34e0ec7b697f,Namespace:calico-system,Attempt:1,}" May 17 00:26:31.204386 containerd[1974]: 2025-05-17 00:26:31.003 [INFO][5401] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047" May 17 00:26:31.204386 containerd[1974]: 2025-05-17 00:26:31.003 [INFO][5401] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047" iface="eth0" netns="/var/run/netns/cni-ed519b33-3653-7d97-9a7b-9d50e7399c2e" May 17 00:26:31.204386 containerd[1974]: 2025-05-17 00:26:31.005 [INFO][5401] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047" iface="eth0" netns="/var/run/netns/cni-ed519b33-3653-7d97-9a7b-9d50e7399c2e" May 17 00:26:31.204386 containerd[1974]: 2025-05-17 00:26:31.006 [INFO][5401] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047" iface="eth0" netns="/var/run/netns/cni-ed519b33-3653-7d97-9a7b-9d50e7399c2e" May 17 00:26:31.204386 containerd[1974]: 2025-05-17 00:26:31.006 [INFO][5401] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047" May 17 00:26:31.204386 containerd[1974]: 2025-05-17 00:26:31.006 [INFO][5401] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047" May 17 00:26:31.204386 containerd[1974]: 2025-05-17 00:26:31.136 [INFO][5425] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047" HandleID="k8s-pod-network.cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047" Workload="ip--172--31--23--228-k8s-coredns--7c65d6cfc9--jsbnl-eth0" May 17 00:26:31.204386 containerd[1974]: 2025-05-17 00:26:31.141 [INFO][5425] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:31.204386 containerd[1974]: 2025-05-17 00:26:31.159 [INFO][5425] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:26:31.204386 containerd[1974]: 2025-05-17 00:26:31.188 [WARNING][5425] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047" HandleID="k8s-pod-network.cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047" Workload="ip--172--31--23--228-k8s-coredns--7c65d6cfc9--jsbnl-eth0" May 17 00:26:31.204386 containerd[1974]: 2025-05-17 00:26:31.190 [INFO][5425] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047" HandleID="k8s-pod-network.cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047" Workload="ip--172--31--23--228-k8s-coredns--7c65d6cfc9--jsbnl-eth0" May 17 00:26:31.204386 containerd[1974]: 2025-05-17 00:26:31.195 [INFO][5425] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:31.204386 containerd[1974]: 2025-05-17 00:26:31.200 [INFO][5401] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047" May 17 00:26:31.206031 containerd[1974]: time="2025-05-17T00:26:31.205211655Z" level=info msg="TearDown network for sandbox \"cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047\" successfully" May 17 00:26:31.206031 containerd[1974]: time="2025-05-17T00:26:31.205241341Z" level=info msg="StopPodSandbox for \"cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047\" returns successfully" May 17 00:26:31.214507 containerd[1974]: time="2025-05-17T00:26:31.211937129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jsbnl,Uid:9832b571-3650-40cf-9d5c-47e3967ad978,Namespace:kube-system,Attempt:1,}" May 17 00:26:31.213100 systemd[1]: run-netns-cni\x2ded519b33\x2d3653\x2d7d97\x2d9a7b\x2d9d50e7399c2e.mount: Deactivated successfully. May 17 00:26:31.551840 (udev-worker)[5316]: Network interface NamePolicy= disabled on kernel command line. 
May 17 00:26:31.554499 systemd-networkd[1893]: cali31a86991182: Link UP May 17 00:26:31.556652 systemd-networkd[1893]: cali31a86991182: Gained carrier May 17 00:26:31.604137 containerd[1974]: 2025-05-17 00:26:31.356 [INFO][5453] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--228-k8s-goldmane--8f77d7b6c--ffn6m-eth0 goldmane-8f77d7b6c- calico-system 6c6feb10-5e10-4718-ae2e-34e0ec7b697f 1031 0 2025-05-17 00:26:05 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:8f77d7b6c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-23-228 goldmane-8f77d7b6c-ffn6m eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali31a86991182 [] [] }} ContainerID="3a4540d357e826bb101d78285ba7e3cad25625b0c017a99db8908c015d68d04d" Namespace="calico-system" Pod="goldmane-8f77d7b6c-ffn6m" WorkloadEndpoint="ip--172--31--23--228-k8s-goldmane--8f77d7b6c--ffn6m-" May 17 00:26:31.604137 containerd[1974]: 2025-05-17 00:26:31.356 [INFO][5453] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3a4540d357e826bb101d78285ba7e3cad25625b0c017a99db8908c015d68d04d" Namespace="calico-system" Pod="goldmane-8f77d7b6c-ffn6m" WorkloadEndpoint="ip--172--31--23--228-k8s-goldmane--8f77d7b6c--ffn6m-eth0" May 17 00:26:31.604137 containerd[1974]: 2025-05-17 00:26:31.436 [INFO][5483] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3a4540d357e826bb101d78285ba7e3cad25625b0c017a99db8908c015d68d04d" HandleID="k8s-pod-network.3a4540d357e826bb101d78285ba7e3cad25625b0c017a99db8908c015d68d04d" Workload="ip--172--31--23--228-k8s-goldmane--8f77d7b6c--ffn6m-eth0" May 17 00:26:31.604137 containerd[1974]: 2025-05-17 00:26:31.436 [INFO][5483] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3a4540d357e826bb101d78285ba7e3cad25625b0c017a99db8908c015d68d04d" HandleID="k8s-pod-network.3a4540d357e826bb101d78285ba7e3cad25625b0c017a99db8908c015d68d04d" Workload="ip--172--31--23--228-k8s-goldmane--8f77d7b6c--ffn6m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000e2430), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-228", "pod":"goldmane-8f77d7b6c-ffn6m", "timestamp":"2025-05-17 00:26:31.436513562 +0000 UTC"}, Hostname:"ip-172-31-23-228", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:26:31.604137 containerd[1974]: 2025-05-17 00:26:31.436 [INFO][5483] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:31.604137 containerd[1974]: 2025-05-17 00:26:31.436 [INFO][5483] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:26:31.604137 containerd[1974]: 2025-05-17 00:26:31.436 [INFO][5483] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-228' May 17 00:26:31.604137 containerd[1974]: 2025-05-17 00:26:31.453 [INFO][5483] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3a4540d357e826bb101d78285ba7e3cad25625b0c017a99db8908c015d68d04d" host="ip-172-31-23-228" May 17 00:26:31.604137 containerd[1974]: 2025-05-17 00:26:31.475 [INFO][5483] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-228" May 17 00:26:31.604137 containerd[1974]: 2025-05-17 00:26:31.494 [INFO][5483] ipam/ipam.go 511: Trying affinity for 192.168.17.128/26 host="ip-172-31-23-228" May 17 00:26:31.604137 containerd[1974]: 2025-05-17 00:26:31.498 [INFO][5483] ipam/ipam.go 158: Attempting to load block cidr=192.168.17.128/26 host="ip-172-31-23-228" May 17 00:26:31.604137 containerd[1974]: 2025-05-17 00:26:31.502 [INFO][5483] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.17.128/26 host="ip-172-31-23-228" May 17 00:26:31.604137 containerd[1974]: 2025-05-17 00:26:31.502 [INFO][5483] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.17.128/26 handle="k8s-pod-network.3a4540d357e826bb101d78285ba7e3cad25625b0c017a99db8908c015d68d04d" host="ip-172-31-23-228" May 17 00:26:31.604137 containerd[1974]: 2025-05-17 00:26:31.504 [INFO][5483] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3a4540d357e826bb101d78285ba7e3cad25625b0c017a99db8908c015d68d04d May 17 00:26:31.604137 containerd[1974]: 2025-05-17 00:26:31.512 [INFO][5483] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.17.128/26 handle="k8s-pod-network.3a4540d357e826bb101d78285ba7e3cad25625b0c017a99db8908c015d68d04d" host="ip-172-31-23-228" May 17 00:26:31.604137 containerd[1974]: 2025-05-17 00:26:31.527 [INFO][5483] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.17.133/26] block=192.168.17.128/26 handle="k8s-pod-network.3a4540d357e826bb101d78285ba7e3cad25625b0c017a99db8908c015d68d04d" host="ip-172-31-23-228" May 17 00:26:31.604137 containerd[1974]: 2025-05-17 00:26:31.527 [INFO][5483] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.17.133/26] handle="k8s-pod-network.3a4540d357e826bb101d78285ba7e3cad25625b0c017a99db8908c015d68d04d" host="ip-172-31-23-228" May 17 00:26:31.604137 containerd[1974]: 2025-05-17 00:26:31.527 [INFO][5483] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
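Every host-side interface this log brings up (cali25e046d9e65, calic71a016879e, cali9698d724ce9, cali31a86991182) is "cali" plus an 11-character suffix, which keeps the whole name within the kernel's 15-character IFNAMSIZ limit. Below is a toy sketch of such a naming scheme; the exact identifier Calico hashes to derive the suffix is an assumption, not something this log shows.

```go
// Toy veth-naming sketch: "cali" + first 11 hex chars of a hash, staying
// within IFNAMSIZ (15 usable chars). The hash input is hypothetical.
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

func vethName(workloadID string) string {
	sum := sha1.Sum([]byte(workloadID))
	return "cali" + hex.EncodeToString(sum[:])[:11] // 4 + 11 = 15 chars
}

func main() {
	fmt.Println(vethName("calico-system/goldmane-8f77d7b6c-ffn6m"))
}
```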
May 17 00:26:31.604137 containerd[1974]: 2025-05-17 00:26:31.527 [INFO][5483] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.133/26] IPv6=[] ContainerID="3a4540d357e826bb101d78285ba7e3cad25625b0c017a99db8908c015d68d04d" HandleID="k8s-pod-network.3a4540d357e826bb101d78285ba7e3cad25625b0c017a99db8908c015d68d04d" Workload="ip--172--31--23--228-k8s-goldmane--8f77d7b6c--ffn6m-eth0" May 17 00:26:31.605134 containerd[1974]: 2025-05-17 00:26:31.538 [INFO][5453] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3a4540d357e826bb101d78285ba7e3cad25625b0c017a99db8908c015d68d04d" Namespace="calico-system" Pod="goldmane-8f77d7b6c-ffn6m" WorkloadEndpoint="ip--172--31--23--228-k8s-goldmane--8f77d7b6c--ffn6m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--228-k8s-goldmane--8f77d7b6c--ffn6m-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"6c6feb10-5e10-4718-ae2e-34e0ec7b697f", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 26, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-228", ContainerID:"", Pod:"goldmane-8f77d7b6c-ffn6m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.17.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali31a86991182", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:31.605134 containerd[1974]: 2025-05-17 00:26:31.540 [INFO][5453] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.17.133/32] ContainerID="3a4540d357e826bb101d78285ba7e3cad25625b0c017a99db8908c015d68d04d" Namespace="calico-system" Pod="goldmane-8f77d7b6c-ffn6m" WorkloadEndpoint="ip--172--31--23--228-k8s-goldmane--8f77d7b6c--ffn6m-eth0" May 17 00:26:31.605134 containerd[1974]: 2025-05-17 00:26:31.541 [INFO][5453] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali31a86991182 ContainerID="3a4540d357e826bb101d78285ba7e3cad25625b0c017a99db8908c015d68d04d" Namespace="calico-system" Pod="goldmane-8f77d7b6c-ffn6m" WorkloadEndpoint="ip--172--31--23--228-k8s-goldmane--8f77d7b6c--ffn6m-eth0" May 17 00:26:31.605134 containerd[1974]: 2025-05-17 00:26:31.558 [INFO][5453] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3a4540d357e826bb101d78285ba7e3cad25625b0c017a99db8908c015d68d04d" Namespace="calico-system" Pod="goldmane-8f77d7b6c-ffn6m" WorkloadEndpoint="ip--172--31--23--228-k8s-goldmane--8f77d7b6c--ffn6m-eth0" May 17 00:26:31.605134 containerd[1974]: 2025-05-17 00:26:31.560 [INFO][5453] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3a4540d357e826bb101d78285ba7e3cad25625b0c017a99db8908c015d68d04d" Namespace="calico-system" Pod="goldmane-8f77d7b6c-ffn6m" 
WorkloadEndpoint="ip--172--31--23--228-k8s-goldmane--8f77d7b6c--ffn6m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--228-k8s-goldmane--8f77d7b6c--ffn6m-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"6c6feb10-5e10-4718-ae2e-34e0ec7b697f", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 26, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-228", ContainerID:"3a4540d357e826bb101d78285ba7e3cad25625b0c017a99db8908c015d68d04d", Pod:"goldmane-8f77d7b6c-ffn6m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.17.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali31a86991182", MAC:"7a:f5:ef:41:98:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:31.605134 containerd[1974]: 2025-05-17 00:26:31.589 [INFO][5453] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3a4540d357e826bb101d78285ba7e3cad25625b0c017a99db8908c015d68d04d" Namespace="calico-system" Pod="goldmane-8f77d7b6c-ffn6m" WorkloadEndpoint="ip--172--31--23--228-k8s-goldmane--8f77d7b6c--ffn6m-eth0" May 17 00:26:31.664079 systemd-networkd[1893]: cali92564b274da: Link UP May 17 00:26:31.670555 systemd-networkd[1893]: cali92564b274da: Gained carrier May 17 00:26:31.744166 containerd[1974]: 2025-05-17 00:26:31.354 [INFO][5443] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--228-k8s-calico--kube--controllers--6b48558975--rlpts-eth0 calico-kube-controllers-6b48558975- calico-system 8a71913d-89eb-4c9e-9dfe-b7eb7c1fd2b5 1033 0 2025-05-17 00:26:06 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6b48558975 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-23-228 calico-kube-controllers-6b48558975-rlpts eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali92564b274da [] [] }} ContainerID="8504e4aee11c90b5308be39bf92b3c6c21448c32c41d3c973d23127764ed2dd2" Namespace="calico-system" Pod="calico-kube-controllers-6b48558975-rlpts" WorkloadEndpoint="ip--172--31--23--228-k8s-calico--kube--controllers--6b48558975--rlpts-" May 17 00:26:31.744166 containerd[1974]: 2025-05-17 00:26:31.355 [INFO][5443] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8504e4aee11c90b5308be39bf92b3c6c21448c32c41d3c973d23127764ed2dd2" Namespace="calico-system" Pod="calico-kube-controllers-6b48558975-rlpts" WorkloadEndpoint="ip--172--31--23--228-k8s-calico--kube--controllers--6b48558975--rlpts-eth0" May 17 00:26:31.744166 
containerd[1974]: 2025-05-17 00:26:31.491 [INFO][5482] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8504e4aee11c90b5308be39bf92b3c6c21448c32c41d3c973d23127764ed2dd2" HandleID="k8s-pod-network.8504e4aee11c90b5308be39bf92b3c6c21448c32c41d3c973d23127764ed2dd2" Workload="ip--172--31--23--228-k8s-calico--kube--controllers--6b48558975--rlpts-eth0" May 17 00:26:31.744166 containerd[1974]: 2025-05-17 00:26:31.492 [INFO][5482] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8504e4aee11c90b5308be39bf92b3c6c21448c32c41d3c973d23127764ed2dd2" HandleID="k8s-pod-network.8504e4aee11c90b5308be39bf92b3c6c21448c32c41d3c973d23127764ed2dd2" Workload="ip--172--31--23--228-k8s-calico--kube--controllers--6b48558975--rlpts-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e390), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-228", "pod":"calico-kube-controllers-6b48558975-rlpts", "timestamp":"2025-05-17 00:26:31.490637578 +0000 UTC"}, Hostname:"ip-172-31-23-228", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:26:31.744166 containerd[1974]: 2025-05-17 00:26:31.492 [INFO][5482] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:31.744166 containerd[1974]: 2025-05-17 00:26:31.528 [INFO][5482] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:26:31.744166 containerd[1974]: 2025-05-17 00:26:31.528 [INFO][5482] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-228' May 17 00:26:31.744166 containerd[1974]: 2025-05-17 00:26:31.554 [INFO][5482] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8504e4aee11c90b5308be39bf92b3c6c21448c32c41d3c973d23127764ed2dd2" host="ip-172-31-23-228" May 17 00:26:31.744166 containerd[1974]: 2025-05-17 00:26:31.580 [INFO][5482] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-228" May 17 00:26:31.744166 containerd[1974]: 2025-05-17 00:26:31.601 [INFO][5482] ipam/ipam.go 511: Trying affinity for 192.168.17.128/26 host="ip-172-31-23-228" May 17 00:26:31.744166 containerd[1974]: 2025-05-17 00:26:31.608 [INFO][5482] ipam/ipam.go 158: Attempting to load block cidr=192.168.17.128/26 host="ip-172-31-23-228" May 17 00:26:31.744166 containerd[1974]: 2025-05-17 00:26:31.612 [INFO][5482] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.17.128/26 host="ip-172-31-23-228" May 17 00:26:31.744166 containerd[1974]: 2025-05-17 00:26:31.612 [INFO][5482] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.17.128/26 handle="k8s-pod-network.8504e4aee11c90b5308be39bf92b3c6c21448c32c41d3c973d23127764ed2dd2" host="ip-172-31-23-228" May 17 00:26:31.744166 containerd[1974]: 2025-05-17 00:26:31.615 [INFO][5482] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8504e4aee11c90b5308be39bf92b3c6c21448c32c41d3c973d23127764ed2dd2 May 17 00:26:31.744166 containerd[1974]: 2025-05-17 00:26:31.630 [INFO][5482] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.17.128/26 handle="k8s-pod-network.8504e4aee11c90b5308be39bf92b3c6c21448c32c41d3c973d23127764ed2dd2" host="ip-172-31-23-228" May 17 00:26:31.744166 containerd[1974]: 2025-05-17 00:26:31.646 [INFO][5482] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.17.134/26] block=192.168.17.128/26 
handle="k8s-pod-network.8504e4aee11c90b5308be39bf92b3c6c21448c32c41d3c973d23127764ed2dd2" host="ip-172-31-23-228" May 17 00:26:31.744166 containerd[1974]: 2025-05-17 00:26:31.646 [INFO][5482] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.17.134/26] handle="k8s-pod-network.8504e4aee11c90b5308be39bf92b3c6c21448c32c41d3c973d23127764ed2dd2" host="ip-172-31-23-228" May 17 00:26:31.744166 containerd[1974]: 2025-05-17 00:26:31.647 [INFO][5482] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:31.744166 containerd[1974]: 2025-05-17 00:26:31.647 [INFO][5482] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.134/26] IPv6=[] ContainerID="8504e4aee11c90b5308be39bf92b3c6c21448c32c41d3c973d23127764ed2dd2" HandleID="k8s-pod-network.8504e4aee11c90b5308be39bf92b3c6c21448c32c41d3c973d23127764ed2dd2" Workload="ip--172--31--23--228-k8s-calico--kube--controllers--6b48558975--rlpts-eth0" May 17 00:26:31.745251 containerd[1974]: 2025-05-17 00:26:31.656 [INFO][5443] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8504e4aee11c90b5308be39bf92b3c6c21448c32c41d3c973d23127764ed2dd2" Namespace="calico-system" Pod="calico-kube-controllers-6b48558975-rlpts" WorkloadEndpoint="ip--172--31--23--228-k8s-calico--kube--controllers--6b48558975--rlpts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--228-k8s-calico--kube--controllers--6b48558975--rlpts-eth0", GenerateName:"calico-kube-controllers-6b48558975-", Namespace:"calico-system", SelfLink:"", UID:"8a71913d-89eb-4c9e-9dfe-b7eb7c1fd2b5", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 26, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b48558975", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-228", ContainerID:"", Pod:"calico-kube-controllers-6b48558975-rlpts", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.17.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali92564b274da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:31.745251 containerd[1974]: 2025-05-17 00:26:31.657 [INFO][5443] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.17.134/32] ContainerID="8504e4aee11c90b5308be39bf92b3c6c21448c32c41d3c973d23127764ed2dd2" Namespace="calico-system" Pod="calico-kube-controllers-6b48558975-rlpts" WorkloadEndpoint="ip--172--31--23--228-k8s-calico--kube--controllers--6b48558975--rlpts-eth0" May 17 00:26:31.745251 containerd[1974]: 2025-05-17 00:26:31.657 [INFO][5443] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali92564b274da ContainerID="8504e4aee11c90b5308be39bf92b3c6c21448c32c41d3c973d23127764ed2dd2" Namespace="calico-system" Pod="calico-kube-controllers-6b48558975-rlpts" 
WorkloadEndpoint="ip--172--31--23--228-k8s-calico--kube--controllers--6b48558975--rlpts-eth0" May 17 00:26:31.745251 containerd[1974]: 2025-05-17 00:26:31.680 [INFO][5443] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8504e4aee11c90b5308be39bf92b3c6c21448c32c41d3c973d23127764ed2dd2" Namespace="calico-system" Pod="calico-kube-controllers-6b48558975-rlpts" WorkloadEndpoint="ip--172--31--23--228-k8s-calico--kube--controllers--6b48558975--rlpts-eth0" May 17 00:26:31.745251 containerd[1974]: 2025-05-17 00:26:31.686 [INFO][5443] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8504e4aee11c90b5308be39bf92b3c6c21448c32c41d3c973d23127764ed2dd2" Namespace="calico-system" Pod="calico-kube-controllers-6b48558975-rlpts" WorkloadEndpoint="ip--172--31--23--228-k8s-calico--kube--controllers--6b48558975--rlpts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--228-k8s-calico--kube--controllers--6b48558975--rlpts-eth0", GenerateName:"calico-kube-controllers-6b48558975-", Namespace:"calico-system", SelfLink:"", UID:"8a71913d-89eb-4c9e-9dfe-b7eb7c1fd2b5", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 26, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b48558975", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-228", ContainerID:"8504e4aee11c90b5308be39bf92b3c6c21448c32c41d3c973d23127764ed2dd2", Pod:"calico-kube-controllers-6b48558975-rlpts", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.17.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali92564b274da", MAC:"ae:d9:be:13:da:c5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:31.745251 containerd[1974]: 2025-05-17 00:26:31.730 [INFO][5443] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8504e4aee11c90b5308be39bf92b3c6c21448c32c41d3c973d23127764ed2dd2" Namespace="calico-system" Pod="calico-kube-controllers-6b48558975-rlpts" WorkloadEndpoint="ip--172--31--23--228-k8s-calico--kube--controllers--6b48558975--rlpts-eth0" May 17 00:26:31.800336 containerd[1974]: time="2025-05-17T00:26:31.800293996Z" level=info msg="StopPodSandbox for \"afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67\"" May 17 00:26:31.827231 systemd-networkd[1893]: cali9a2da7e438b: Link UP May 17 00:26:31.827703 systemd-networkd[1893]: cali9a2da7e438b: Gained carrier May 17 00:26:31.854536 containerd[1974]: time="2025-05-17T00:26:31.853707696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:26:31.854536 containerd[1974]: time="2025-05-17T00:26:31.853788755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:26:31.857996 containerd[1974]: time="2025-05-17T00:26:31.853823982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:26:31.857996 containerd[1974]: time="2025-05-17T00:26:31.853994121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:26:31.861450 containerd[1974]: time="2025-05-17T00:26:31.861172174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:26:31.861450 containerd[1974]: time="2025-05-17T00:26:31.861263439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:26:31.861450 containerd[1974]: time="2025-05-17T00:26:31.861282839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:26:31.866285 containerd[1974]: time="2025-05-17T00:26:31.866127039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:26:31.887343 containerd[1974]: 2025-05-17 00:26:31.411 [INFO][5459] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--228-k8s-coredns--7c65d6cfc9--jsbnl-eth0 coredns-7c65d6cfc9- kube-system 9832b571-3650-40cf-9d5c-47e3967ad978 1032 0 2025-05-17 00:25:49 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-228 coredns-7c65d6cfc9-jsbnl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9a2da7e438b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ecf523187133c4f0a1f2dc282b62d7da7367b57583c5b0ce8b9db3c346b1ac20" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jsbnl" WorkloadEndpoint="ip--172--31--23--228-k8s-coredns--7c65d6cfc9--jsbnl-" May 17 00:26:31.887343 containerd[1974]: 2025-05-17 00:26:31.413 [INFO][5459] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ecf523187133c4f0a1f2dc282b62d7da7367b57583c5b0ce8b9db3c346b1ac20" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jsbnl" WorkloadEndpoint="ip--172--31--23--228-k8s-coredns--7c65d6cfc9--jsbnl-eth0" May 17 00:26:31.887343 containerd[1974]: 2025-05-17 00:26:31.552 [INFO][5493] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ecf523187133c4f0a1f2dc282b62d7da7367b57583c5b0ce8b9db3c346b1ac20" HandleID="k8s-pod-network.ecf523187133c4f0a1f2dc282b62d7da7367b57583c5b0ce8b9db3c346b1ac20" Workload="ip--172--31--23--228-k8s-coredns--7c65d6cfc9--jsbnl-eth0" May 17 00:26:31.887343 containerd[1974]: 2025-05-17 00:26:31.553 [INFO][5493] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ecf523187133c4f0a1f2dc282b62d7da7367b57583c5b0ce8b9db3c346b1ac20" HandleID="k8s-pod-network.ecf523187133c4f0a1f2dc282b62d7da7367b57583c5b0ce8b9db3c346b1ac20" Workload="ip--172--31--23--228-k8s-coredns--7c65d6cfc9--jsbnl-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000382a90), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-228", "pod":"coredns-7c65d6cfc9-jsbnl", "timestamp":"2025-05-17 00:26:31.552722425 +0000 UTC"}, Hostname:"ip-172-31-23-228", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:26:31.887343 containerd[1974]: 2025-05-17 00:26:31.553 [INFO][5493] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:31.887343 containerd[1974]: 2025-05-17 00:26:31.647 [INFO][5493] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:26:31.887343 containerd[1974]: 2025-05-17 00:26:31.647 [INFO][5493] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-228' May 17 00:26:31.887343 containerd[1974]: 2025-05-17 00:26:31.675 [INFO][5493] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ecf523187133c4f0a1f2dc282b62d7da7367b57583c5b0ce8b9db3c346b1ac20" host="ip-172-31-23-228" May 17 00:26:31.887343 containerd[1974]: 2025-05-17 00:26:31.705 [INFO][5493] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-228" May 17 00:26:31.887343 containerd[1974]: 2025-05-17 00:26:31.728 [INFO][5493] ipam/ipam.go 511: Trying affinity for 192.168.17.128/26 host="ip-172-31-23-228" May 17 00:26:31.887343 containerd[1974]: 2025-05-17 00:26:31.736 [INFO][5493] ipam/ipam.go 158: Attempting to load block cidr=192.168.17.128/26 host="ip-172-31-23-228" May 17 00:26:31.887343 containerd[1974]: 2025-05-17 00:26:31.746 [INFO][5493] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.17.128/26 host="ip-172-31-23-228" May 17 00:26:31.887343 containerd[1974]: 2025-05-17 00:26:31.746 [INFO][5493] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.17.128/26 handle="k8s-pod-network.ecf523187133c4f0a1f2dc282b62d7da7367b57583c5b0ce8b9db3c346b1ac20" host="ip-172-31-23-228" May 17 00:26:31.887343 containerd[1974]: 2025-05-17 00:26:31.755 [INFO][5493] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ecf523187133c4f0a1f2dc282b62d7da7367b57583c5b0ce8b9db3c346b1ac20 May 17 00:26:31.887343 containerd[1974]: 2025-05-17 00:26:31.772 [INFO][5493] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.17.128/26 handle="k8s-pod-network.ecf523187133c4f0a1f2dc282b62d7da7367b57583c5b0ce8b9db3c346b1ac20" host="ip-172-31-23-228" May 17 00:26:31.887343 containerd[1974]: 2025-05-17 00:26:31.800 [INFO][5493] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.17.135/26] block=192.168.17.128/26 handle="k8s-pod-network.ecf523187133c4f0a1f2dc282b62d7da7367b57583c5b0ce8b9db3c346b1ac20" host="ip-172-31-23-228" May 17 00:26:31.887343 containerd[1974]: 2025-05-17 00:26:31.802 [INFO][5493] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.17.135/26] handle="k8s-pod-network.ecf523187133c4f0a1f2dc282b62d7da7367b57583c5b0ce8b9db3c346b1ac20" host="ip-172-31-23-228" May 17 00:26:31.887343 containerd[1974]: 2025-05-17 00:26:31.802 [INFO][5493] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:26:31.887343 containerd[1974]: 2025-05-17 00:26:31.802 [INFO][5493] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.135/26] IPv6=[] ContainerID="ecf523187133c4f0a1f2dc282b62d7da7367b57583c5b0ce8b9db3c346b1ac20" HandleID="k8s-pod-network.ecf523187133c4f0a1f2dc282b62d7da7367b57583c5b0ce8b9db3c346b1ac20" Workload="ip--172--31--23--228-k8s-coredns--7c65d6cfc9--jsbnl-eth0" May 17 00:26:31.889728 containerd[1974]: 2025-05-17 00:26:31.816 [INFO][5459] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ecf523187133c4f0a1f2dc282b62d7da7367b57583c5b0ce8b9db3c346b1ac20" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jsbnl" WorkloadEndpoint="ip--172--31--23--228-k8s-coredns--7c65d6cfc9--jsbnl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--228-k8s-coredns--7c65d6cfc9--jsbnl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9832b571-3650-40cf-9d5c-47e3967ad978", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 25, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-228", ContainerID:"", Pod:"coredns-7c65d6cfc9-jsbnl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9a2da7e438b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:31.889728 containerd[1974]: 2025-05-17 00:26:31.817 [INFO][5459] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.17.135/32] ContainerID="ecf523187133c4f0a1f2dc282b62d7da7367b57583c5b0ce8b9db3c346b1ac20" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jsbnl" WorkloadEndpoint="ip--172--31--23--228-k8s-coredns--7c65d6cfc9--jsbnl-eth0" May 17 00:26:31.889728 containerd[1974]: 2025-05-17 00:26:31.818 [INFO][5459] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9a2da7e438b ContainerID="ecf523187133c4f0a1f2dc282b62d7da7367b57583c5b0ce8b9db3c346b1ac20" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jsbnl" WorkloadEndpoint="ip--172--31--23--228-k8s-coredns--7c65d6cfc9--jsbnl-eth0" May 17 00:26:31.889728 containerd[1974]: 2025-05-17 00:26:31.828 [INFO][5459] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ecf523187133c4f0a1f2dc282b62d7da7367b57583c5b0ce8b9db3c346b1ac20" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jsbnl" 
WorkloadEndpoint="ip--172--31--23--228-k8s-coredns--7c65d6cfc9--jsbnl-eth0" May 17 00:26:31.889728 containerd[1974]: 2025-05-17 00:26:31.838 [INFO][5459] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ecf523187133c4f0a1f2dc282b62d7da7367b57583c5b0ce8b9db3c346b1ac20" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jsbnl" WorkloadEndpoint="ip--172--31--23--228-k8s-coredns--7c65d6cfc9--jsbnl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--228-k8s-coredns--7c65d6cfc9--jsbnl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9832b571-3650-40cf-9d5c-47e3967ad978", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 25, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-228", ContainerID:"ecf523187133c4f0a1f2dc282b62d7da7367b57583c5b0ce8b9db3c346b1ac20", Pod:"coredns-7c65d6cfc9-jsbnl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9a2da7e438b", MAC:"d6:1a:0b:6f:5e:44", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:31.889728 containerd[1974]: 2025-05-17 00:26:31.875 [INFO][5459] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ecf523187133c4f0a1f2dc282b62d7da7367b57583c5b0ce8b9db3c346b1ac20" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jsbnl" WorkloadEndpoint="ip--172--31--23--228-k8s-coredns--7c65d6cfc9--jsbnl-eth0" May 17 00:26:31.976298 containerd[1974]: time="2025-05-17T00:26:31.975597165Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:26:31.980408 containerd[1974]: time="2025-05-17T00:26:31.976733052Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:26:31.980408 containerd[1974]: time="2025-05-17T00:26:31.979869308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:26:31.980408 containerd[1974]: time="2025-05-17T00:26:31.980010340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:26:31.986706 systemd[1]: Started cri-containerd-8504e4aee11c90b5308be39bf92b3c6c21448c32c41d3c973d23127764ed2dd2.scope - libcontainer container 8504e4aee11c90b5308be39bf92b3c6c21448c32c41d3c973d23127764ed2dd2. May 17 00:26:32.018673 systemd[1]: Started cri-containerd-3a4540d357e826bb101d78285ba7e3cad25625b0c017a99db8908c015d68d04d.scope - libcontainer container 3a4540d357e826bb101d78285ba7e3cad25625b0c017a99db8908c015d68d04d. May 17 00:26:32.033624 systemd[1]: Started cri-containerd-ecf523187133c4f0a1f2dc282b62d7da7367b57583c5b0ce8b9db3c346b1ac20.scope - libcontainer container ecf523187133c4f0a1f2dc282b62d7da7367b57583c5b0ce8b9db3c346b1ac20. May 17 00:26:32.176533 containerd[1974]: time="2025-05-17T00:26:32.175920320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jsbnl,Uid:9832b571-3650-40cf-9d5c-47e3967ad978,Namespace:kube-system,Attempt:1,} returns sandbox id \"ecf523187133c4f0a1f2dc282b62d7da7367b57583c5b0ce8b9db3c346b1ac20\"" May 17 00:26:32.193049 containerd[1974]: time="2025-05-17T00:26:32.193005984Z" level=info msg="CreateContainer within sandbox \"ecf523187133c4f0a1f2dc282b62d7da7367b57583c5b0ce8b9db3c346b1ac20\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:26:32.207885 containerd[1974]: 2025-05-17 00:26:32.016 [INFO][5559] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67" May 17 00:26:32.207885 containerd[1974]: 2025-05-17 00:26:32.016 [INFO][5559] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67" iface="eth0" netns="/var/run/netns/cni-7e23262f-cc54-fe12-5dbf-53fc93d07dde" May 17 00:26:32.207885 containerd[1974]: 2025-05-17 00:26:32.018 [INFO][5559] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67" iface="eth0" netns="/var/run/netns/cni-7e23262f-cc54-fe12-5dbf-53fc93d07dde" May 17 00:26:32.207885 containerd[1974]: 2025-05-17 00:26:32.020 [INFO][5559] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67" iface="eth0" netns="/var/run/netns/cni-7e23262f-cc54-fe12-5dbf-53fc93d07dde" May 17 00:26:32.207885 containerd[1974]: 2025-05-17 00:26:32.021 [INFO][5559] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67" May 17 00:26:32.207885 containerd[1974]: 2025-05-17 00:26:32.021 [INFO][5559] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67" May 17 00:26:32.207885 containerd[1974]: 2025-05-17 00:26:32.147 [INFO][5630] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67" HandleID="k8s-pod-network.afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67" Workload="ip--172--31--23--228-k8s-csi--node--driver--4gndq-eth0" May 17 00:26:32.207885 containerd[1974]: 2025-05-17 00:26:32.147 [INFO][5630] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:32.207885 containerd[1974]: 2025-05-17 00:26:32.148 [INFO][5630] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:26:32.207885 containerd[1974]: 2025-05-17 00:26:32.191 [WARNING][5630] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67" HandleID="k8s-pod-network.afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67" Workload="ip--172--31--23--228-k8s-csi--node--driver--4gndq-eth0" May 17 00:26:32.207885 containerd[1974]: 2025-05-17 00:26:32.191 [INFO][5630] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67" HandleID="k8s-pod-network.afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67" Workload="ip--172--31--23--228-k8s-csi--node--driver--4gndq-eth0" May 17 00:26:32.207885 containerd[1974]: 2025-05-17 00:26:32.195 [INFO][5630] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:32.207885 containerd[1974]: 2025-05-17 00:26:32.201 [INFO][5559] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67" May 17 00:26:32.217542 containerd[1974]: time="2025-05-17T00:26:32.216615084Z" level=info msg="TearDown network for sandbox \"afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67\" successfully" May 17 00:26:32.217542 containerd[1974]: time="2025-05-17T00:26:32.216663281Z" level=info msg="StopPodSandbox for \"afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67\" returns successfully" May 17 00:26:32.220602 systemd[1]: run-netns-cni\x2d7e23262f\x2dcc54\x2dfe12\x2d5dbf\x2d53fc93d07dde.mount: Deactivated successfully. May 17 00:26:32.224566 containerd[1974]: time="2025-05-17T00:26:32.224526538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4gndq,Uid:d1caf04b-d279-4556-9507-efceb97ef03e,Namespace:calico-system,Attempt:1,}" May 17 00:26:32.273160 containerd[1974]: time="2025-05-17T00:26:32.273006474Z" level=info msg="CreateContainer within sandbox \"ecf523187133c4f0a1f2dc282b62d7da7367b57583c5b0ce8b9db3c346b1ac20\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4fc77bf3d310bea011662bde4d6d45bddd50f40342ecfac61feebb7e45111f32\"" May 17 00:26:32.279340 containerd[1974]: time="2025-05-17T00:26:32.275139185Z" level=info msg="StartContainer for \"4fc77bf3d310bea011662bde4d6d45bddd50f40342ecfac61feebb7e45111f32\"" May 17 00:26:32.403002 containerd[1974]: time="2025-05-17T00:26:32.402955288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b48558975-rlpts,Uid:8a71913d-89eb-4c9e-9dfe-b7eb7c1fd2b5,Namespace:calico-system,Attempt:1,} returns sandbox id \"8504e4aee11c90b5308be39bf92b3c6c21448c32c41d3c973d23127764ed2dd2\"" May 17 00:26:32.465694 systemd[1]: Started cri-containerd-4fc77bf3d310bea011662bde4d6d45bddd50f40342ecfac61feebb7e45111f32.scope - libcontainer container 4fc77bf3d310bea011662bde4d6d45bddd50f40342ecfac61feebb7e45111f32. 
May 17 00:26:32.550033 containerd[1974]: time="2025-05-17T00:26:32.549571853Z" level=info msg="StartContainer for \"4fc77bf3d310bea011662bde4d6d45bddd50f40342ecfac61feebb7e45111f32\" returns successfully" May 17 00:26:32.601840 containerd[1974]: time="2025-05-17T00:26:32.601346340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-ffn6m,Uid:6c6feb10-5e10-4718-ae2e-34e0ec7b697f,Namespace:calico-system,Attempt:1,} returns sandbox id \"3a4540d357e826bb101d78285ba7e3cad25625b0c017a99db8908c015d68d04d\"" May 17 00:26:32.687662 containerd[1974]: time="2025-05-17T00:26:32.686547804Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:26:32.689105 containerd[1974]: time="2025-05-17T00:26:32.689055730Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431" May 17 00:26:32.694019 containerd[1974]: time="2025-05-17T00:26:32.693976687Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:26:32.699874 containerd[1974]: time="2025-05-17T00:26:32.699829541Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:26:32.701059 containerd[1974]: time="2025-05-17T00:26:32.700938749Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 4.032430086s" May 17 00:26:32.701244 containerd[1974]: time="2025-05-17T00:26:32.701211272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 00:26:32.714734 containerd[1974]: time="2025-05-17T00:26:32.714700080Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:26:32.722799 containerd[1974]: time="2025-05-17T00:26:32.722691988Z" level=info msg="CreateContainer within sandbox \"b95cb90ef8faa3d3b24491ba3d895e60b1afe42966eaa742a44cb26d2b3dadbf\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:26:32.749993 systemd-networkd[1893]: cali292b362b30c: Link UP May 17 00:26:32.751173 systemd-networkd[1893]: cali292b362b30c: Gained carrier May 17 00:26:32.751600 systemd-networkd[1893]: cali31a86991182: Gained IPv6LL May 17 00:26:32.801166 containerd[1974]: time="2025-05-17T00:26:32.800910042Z" level=info msg="CreateContainer within sandbox \"b95cb90ef8faa3d3b24491ba3d895e60b1afe42966eaa742a44cb26d2b3dadbf\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7fb0385fa42afbe0668dc53e91675fa60f0315134b39f4b5fa3a43bec99dd3ec\"" May 17 00:26:32.801841 containerd[1974]: time="2025-05-17T00:26:32.801810303Z" level=info msg="StartContainer for \"7fb0385fa42afbe0668dc53e91675fa60f0315134b39f4b5fa3a43bec99dd3ec\"" May 17 00:26:32.838451 containerd[1974]: 2025-05-17 00:26:32.424 [INFO][5667] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ip--172--31--23--228-k8s-csi--node--driver--4gndq-eth0 csi-node-driver- calico-system d1caf04b-d279-4556-9507-efceb97ef03e 1050 0 2025-05-17 00:26:06 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:68bf44dd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-23-228 csi-node-driver-4gndq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali292b362b30c [] [] }} ContainerID="62166d67fc35bb63713aa4c46832acc3c31b40478ced9f0daf969885fcfb8b01" Namespace="calico-system" Pod="csi-node-driver-4gndq" WorkloadEndpoint="ip--172--31--23--228-k8s-csi--node--driver--4gndq-" May 17 00:26:32.838451 containerd[1974]: 2025-05-17 00:26:32.429 [INFO][5667] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="62166d67fc35bb63713aa4c46832acc3c31b40478ced9f0daf969885fcfb8b01" Namespace="calico-system" Pod="csi-node-driver-4gndq" WorkloadEndpoint="ip--172--31--23--228-k8s-csi--node--driver--4gndq-eth0" May 17 00:26:32.838451 containerd[1974]: 2025-05-17 00:26:32.589 [INFO][5702] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="62166d67fc35bb63713aa4c46832acc3c31b40478ced9f0daf969885fcfb8b01" HandleID="k8s-pod-network.62166d67fc35bb63713aa4c46832acc3c31b40478ced9f0daf969885fcfb8b01" Workload="ip--172--31--23--228-k8s-csi--node--driver--4gndq-eth0" May 17 00:26:32.838451 containerd[1974]: 2025-05-17 00:26:32.591 [INFO][5702] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="62166d67fc35bb63713aa4c46832acc3c31b40478ced9f0daf969885fcfb8b01" HandleID="k8s-pod-network.62166d67fc35bb63713aa4c46832acc3c31b40478ced9f0daf969885fcfb8b01" Workload="ip--172--31--23--228-k8s-csi--node--driver--4gndq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000122580), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-228", "pod":"csi-node-driver-4gndq", "timestamp":"2025-05-17 00:26:32.589016946 +0000 UTC"}, Hostname:"ip-172-31-23-228", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:26:32.838451 containerd[1974]: 2025-05-17 00:26:32.591 [INFO][5702] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:32.838451 containerd[1974]: 2025-05-17 00:26:32.591 [INFO][5702] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:26:32.838451 containerd[1974]: 2025-05-17 00:26:32.591 [INFO][5702] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-228' May 17 00:26:32.838451 containerd[1974]: 2025-05-17 00:26:32.618 [INFO][5702] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.62166d67fc35bb63713aa4c46832acc3c31b40478ced9f0daf969885fcfb8b01" host="ip-172-31-23-228" May 17 00:26:32.838451 containerd[1974]: 2025-05-17 00:26:32.627 [INFO][5702] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-228" May 17 00:26:32.838451 containerd[1974]: 2025-05-17 00:26:32.644 [INFO][5702] ipam/ipam.go 511: Trying affinity for 192.168.17.128/26 host="ip-172-31-23-228" May 17 00:26:32.838451 containerd[1974]: 2025-05-17 00:26:32.648 [INFO][5702] ipam/ipam.go 158: Attempting to load block cidr=192.168.17.128/26 host="ip-172-31-23-228" May 17 00:26:32.838451 containerd[1974]: 2025-05-17 00:26:32.655 [INFO][5702] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.17.128/26 host="ip-172-31-23-228" May 17 00:26:32.838451 containerd[1974]: 2025-05-17 00:26:32.655 [INFO][5702] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.17.128/26 handle="k8s-pod-network.62166d67fc35bb63713aa4c46832acc3c31b40478ced9f0daf969885fcfb8b01" host="ip-172-31-23-228" May 17 00:26:32.838451 containerd[1974]: 2025-05-17 00:26:32.663 [INFO][5702] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.62166d67fc35bb63713aa4c46832acc3c31b40478ced9f0daf969885fcfb8b01 May 17 00:26:32.838451 containerd[1974]: 2025-05-17 00:26:32.690 [INFO][5702] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.17.128/26 handle="k8s-pod-network.62166d67fc35bb63713aa4c46832acc3c31b40478ced9f0daf969885fcfb8b01" host="ip-172-31-23-228" May 17 00:26:32.838451 containerd[1974]: 2025-05-17 00:26:32.725 [INFO][5702] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.17.136/26] block=192.168.17.128/26 handle="k8s-pod-network.62166d67fc35bb63713aa4c46832acc3c31b40478ced9f0daf969885fcfb8b01" host="ip-172-31-23-228" May 17 00:26:32.838451 containerd[1974]: 2025-05-17 00:26:32.725 [INFO][5702] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.17.136/26] handle="k8s-pod-network.62166d67fc35bb63713aa4c46832acc3c31b40478ced9f0daf969885fcfb8b01" host="ip-172-31-23-228" May 17 00:26:32.838451 containerd[1974]: 2025-05-17 00:26:32.726 [INFO][5702] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:26:32.838451 containerd[1974]: 2025-05-17 00:26:32.726 [INFO][5702] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.136/26] IPv6=[] ContainerID="62166d67fc35bb63713aa4c46832acc3c31b40478ced9f0daf969885fcfb8b01" HandleID="k8s-pod-network.62166d67fc35bb63713aa4c46832acc3c31b40478ced9f0daf969885fcfb8b01" Workload="ip--172--31--23--228-k8s-csi--node--driver--4gndq-eth0" May 17 00:26:32.842082 containerd[1974]: 2025-05-17 00:26:32.735 [INFO][5667] cni-plugin/k8s.go 418: Populated endpoint ContainerID="62166d67fc35bb63713aa4c46832acc3c31b40478ced9f0daf969885fcfb8b01" Namespace="calico-system" Pod="csi-node-driver-4gndq" WorkloadEndpoint="ip--172--31--23--228-k8s-csi--node--driver--4gndq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--228-k8s-csi--node--driver--4gndq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d1caf04b-d279-4556-9507-efceb97ef03e", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 26, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-228", ContainerID:"", Pod:"csi-node-driver-4gndq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.17.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali292b362b30c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:32.842082 containerd[1974]: 2025-05-17 00:26:32.735 [INFO][5667] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.17.136/32] ContainerID="62166d67fc35bb63713aa4c46832acc3c31b40478ced9f0daf969885fcfb8b01" Namespace="calico-system" Pod="csi-node-driver-4gndq" WorkloadEndpoint="ip--172--31--23--228-k8s-csi--node--driver--4gndq-eth0" May 17 00:26:32.842082 containerd[1974]: 2025-05-17 00:26:32.735 [INFO][5667] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali292b362b30c ContainerID="62166d67fc35bb63713aa4c46832acc3c31b40478ced9f0daf969885fcfb8b01" Namespace="calico-system" Pod="csi-node-driver-4gndq" WorkloadEndpoint="ip--172--31--23--228-k8s-csi--node--driver--4gndq-eth0" May 17 00:26:32.842082 containerd[1974]: 2025-05-17 00:26:32.762 [INFO][5667] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="62166d67fc35bb63713aa4c46832acc3c31b40478ced9f0daf969885fcfb8b01" Namespace="calico-system" Pod="csi-node-driver-4gndq" WorkloadEndpoint="ip--172--31--23--228-k8s-csi--node--driver--4gndq-eth0" May 17 00:26:32.842082 containerd[1974]: 2025-05-17 00:26:32.777 [INFO][5667] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="62166d67fc35bb63713aa4c46832acc3c31b40478ced9f0daf969885fcfb8b01" 
Namespace="calico-system" Pod="csi-node-driver-4gndq" WorkloadEndpoint="ip--172--31--23--228-k8s-csi--node--driver--4gndq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--228-k8s-csi--node--driver--4gndq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d1caf04b-d279-4556-9507-efceb97ef03e", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 26, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-228", ContainerID:"62166d67fc35bb63713aa4c46832acc3c31b40478ced9f0daf969885fcfb8b01", Pod:"csi-node-driver-4gndq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.17.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali292b362b30c", MAC:"9e:d8:c7:34:1b:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:32.842082 containerd[1974]: 2025-05-17 00:26:32.824 [INFO][5667] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="62166d67fc35bb63713aa4c46832acc3c31b40478ced9f0daf969885fcfb8b01" Namespace="calico-system" Pod="csi-node-driver-4gndq" WorkloadEndpoint="ip--172--31--23--228-k8s-csi--node--driver--4gndq-eth0" May 17 00:26:32.896084 systemd[1]: Started cri-containerd-7fb0385fa42afbe0668dc53e91675fa60f0315134b39f4b5fa3a43bec99dd3ec.scope - libcontainer container 7fb0385fa42afbe0668dc53e91675fa60f0315134b39f4b5fa3a43bec99dd3ec. May 17 00:26:32.934626 containerd[1974]: time="2025-05-17T00:26:32.934275464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:26:32.934626 containerd[1974]: time="2025-05-17T00:26:32.934347192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:26:32.934626 containerd[1974]: time="2025-05-17T00:26:32.934363168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:26:32.934626 containerd[1974]: time="2025-05-17T00:26:32.934522771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:26:32.974480 systemd[1]: Started cri-containerd-62166d67fc35bb63713aa4c46832acc3c31b40478ced9f0daf969885fcfb8b01.scope - libcontainer container 62166d67fc35bb63713aa4c46832acc3c31b40478ced9f0daf969885fcfb8b01. 
May 17 00:26:33.067164 containerd[1974]: time="2025-05-17T00:26:33.066910210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4gndq,Uid:d1caf04b-d279-4556-9507-efceb97ef03e,Namespace:calico-system,Attempt:1,} returns sandbox id \"62166d67fc35bb63713aa4c46832acc3c31b40478ced9f0daf969885fcfb8b01\"" May 17 00:26:33.104698 containerd[1974]: time="2025-05-17T00:26:33.104638118Z" level=info msg="StartContainer for \"7fb0385fa42afbe0668dc53e91675fa60f0315134b39f4b5fa3a43bec99dd3ec\" returns successfully" May 17 00:26:33.106765 containerd[1974]: time="2025-05-17T00:26:33.106722512Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:26:33.108667 containerd[1974]: time="2025-05-17T00:26:33.108480379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 17 00:26:33.115147 containerd[1974]: time="2025-05-17T00:26:33.115025318Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 400.10628ms" May 17 00:26:33.115147 containerd[1974]: time="2025-05-17T00:26:33.115090298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 00:26:33.120056 containerd[1974]: time="2025-05-17T00:26:33.119994550Z" level=info msg="CreateContainer within sandbox \"5edd0cd1908ace2ba7c9b8f2c4263cd10e36210374829de75ee4faf607f40b07\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:26:33.120547 containerd[1974]: time="2025-05-17T00:26:33.120511775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 17 00:26:33.174076 containerd[1974]: time="2025-05-17T00:26:33.174026938Z" level=info msg="CreateContainer within sandbox \"5edd0cd1908ace2ba7c9b8f2c4263cd10e36210374829de75ee4faf607f40b07\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"329d2566bc50cf14093599478d39940982088c43b9bccc3110402487473ffc9f\"" May 17 00:26:33.177795 containerd[1974]: time="2025-05-17T00:26:33.176004926Z" level=info msg="StartContainer for \"329d2566bc50cf14093599478d39940982088c43b9bccc3110402487473ffc9f\"" May 17 00:26:33.252647 systemd[1]: Started cri-containerd-329d2566bc50cf14093599478d39940982088c43b9bccc3110402487473ffc9f.scope - libcontainer container 329d2566bc50cf14093599478d39940982088c43b9bccc3110402487473ffc9f. 
May 17 00:26:33.262665 systemd-networkd[1893]: cali92564b274da: Gained IPv6LL May 17 00:26:33.389962 kubelet[3186]: I0517 00:26:33.389894 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5f57798587-cckmk" podStartSLOduration=28.342776308 podStartE2EDuration="32.389869963s" podCreationTimestamp="2025-05-17 00:26:01 +0000 UTC" firstStartedPulling="2025-05-17 00:26:28.666498155 +0000 UTC m=+44.980821196" lastFinishedPulling="2025-05-17 00:26:32.713591811 +0000 UTC m=+49.027914851" observedRunningTime="2025-05-17 00:26:33.344944109 +0000 UTC m=+49.659267158" watchObservedRunningTime="2025-05-17 00:26:33.389869963 +0000 UTC m=+49.704193012" May 17 00:26:33.393337 kubelet[3186]: I0517 00:26:33.392406 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-jsbnl" podStartSLOduration=44.392383957 podStartE2EDuration="44.392383957s" podCreationTimestamp="2025-05-17 00:25:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:26:33.389555863 +0000 UTC m=+49.703878913" watchObservedRunningTime="2025-05-17 00:26:33.392383957 +0000 UTC m=+49.706707009" May 17 00:26:33.452362 containerd[1974]: time="2025-05-17T00:26:33.452197687Z" level=info msg="StartContainer for \"329d2566bc50cf14093599478d39940982088c43b9bccc3110402487473ffc9f\" returns successfully" May 17 00:26:33.686310 systemd[1]: Started sshd@10-172.31.23.228:22-147.75.109.163:38558.service - OpenSSH per-connection server daemon (147.75.109.163:38558). May 17 00:26:33.775061 systemd-networkd[1893]: cali9a2da7e438b: Gained IPv6LL May 17 00:26:33.839037 systemd-networkd[1893]: cali292b362b30c: Gained IPv6LL May 17 00:26:33.960598 sshd[5870]: Accepted publickey for core from 147.75.109.163 port 38558 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:26:33.965952 sshd[5870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:26:33.974657 systemd-logind[1955]: New session 11 of user core. May 17 00:26:33.983625 systemd[1]: Started session-11.scope - Session 11 of User core. May 17 00:26:34.366270 kubelet[3186]: I0517 00:26:34.365471 3186 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:26:35.173335 sshd[5870]: pam_unix(sshd:session): session closed for user core May 17 00:26:35.178842 systemd[1]: sshd@10-172.31.23.228:22-147.75.109.163:38558.service: Deactivated successfully. May 17 00:26:35.179764 systemd-logind[1955]: Session 11 logged out. Waiting for processes to exit. May 17 00:26:35.186168 systemd[1]: session-11.scope: Deactivated successfully. May 17 00:26:35.188976 systemd-logind[1955]: Removed session 11. 
May 17 00:26:35.976178 kubelet[3186]: I0517 00:26:35.975910 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5f57798587-b77km" podStartSLOduration=30.64807992 podStartE2EDuration="34.97588946s" podCreationTimestamp="2025-05-17 00:26:01 +0000 UTC" firstStartedPulling="2025-05-17 00:26:28.788718862 +0000 UTC m=+45.103041902" lastFinishedPulling="2025-05-17 00:26:33.116528196 +0000 UTC m=+49.430851442" observedRunningTime="2025-05-17 00:26:34.373376255 +0000 UTC m=+50.687699303" watchObservedRunningTime="2025-05-17 00:26:35.97588946 +0000 UTC m=+52.290212501" May 17 00:26:36.635677 ntpd[1949]: Listen normally on 8 vxlan.calico 192.168.17.128:123 May 17 00:26:36.636140 ntpd[1949]: 17 May 00:26:36 ntpd[1949]: Listen normally on 8 vxlan.calico 192.168.17.128:123 May 17 00:26:36.636140 ntpd[1949]: 17 May 00:26:36 ntpd[1949]: Listen normally on 9 calid33deb1fe77 [fe80::ecee:eeff:feee:eeee%4]:123 May 17 00:26:36.636140 ntpd[1949]: 17 May 00:26:36 ntpd[1949]: Listen normally on 10 cali25e046d9e65 [fe80::ecee:eeff:feee:eeee%5]:123 May 17 00:26:36.636140 ntpd[1949]: 17 May 00:26:36 ntpd[1949]: Listen normally on 11 calic71a016879e [fe80::ecee:eeff:feee:eeee%6]:123 May 17 00:26:36.636140 ntpd[1949]: 17 May 00:26:36 ntpd[1949]: Listen normally on 12 cali9698d724ce9 [fe80::ecee:eeff:feee:eeee%7]:123 May 17 00:26:36.636140 ntpd[1949]: 17 May 00:26:36 ntpd[1949]: Listen normally on 13 vxlan.calico [fe80::64ac:ecff:fe0e:5ec8%8]:123 May 17 00:26:36.636140 ntpd[1949]: 17 May 00:26:36 ntpd[1949]: Listen normally on 14 cali31a86991182 [fe80::ecee:eeff:feee:eeee%11]:123 May 17 00:26:36.636140 ntpd[1949]: 17 May 00:26:36 ntpd[1949]: Listen normally on 15 cali92564b274da [fe80::ecee:eeff:feee:eeee%12]:123 May 17 00:26:36.636140 ntpd[1949]: 17 May 00:26:36 ntpd[1949]: Listen normally on 16 cali9a2da7e438b [fe80::ecee:eeff:feee:eeee%13]:123 May 17 00:26:36.636140 ntpd[1949]: 17 May 00:26:36 ntpd[1949]: Listen normally on 17 cali292b362b30c [fe80::ecee:eeff:feee:eeee%14]:123 May 17 00:26:36.635750 ntpd[1949]: Listen normally on 9 calid33deb1fe77 [fe80::ecee:eeff:feee:eeee%4]:123 May 17 00:26:36.635795 ntpd[1949]: Listen normally on 10 cali25e046d9e65 [fe80::ecee:eeff:feee:eeee%5]:123 May 17 00:26:36.635824 ntpd[1949]: Listen normally on 11 calic71a016879e [fe80::ecee:eeff:feee:eeee%6]:123 May 17 00:26:36.635852 ntpd[1949]: Listen normally on 12 cali9698d724ce9 [fe80::ecee:eeff:feee:eeee%7]:123 May 17 00:26:36.635880 ntpd[1949]: Listen normally on 13 vxlan.calico [fe80::64ac:ecff:fe0e:5ec8%8]:123 May 17 00:26:36.635908 ntpd[1949]: Listen normally on 14 cali31a86991182 [fe80::ecee:eeff:feee:eeee%11]:123 May 17 00:26:36.635935 ntpd[1949]: Listen normally on 15 cali92564b274da [fe80::ecee:eeff:feee:eeee%12]:123 May 17 00:26:36.635963 ntpd[1949]: Listen normally on 16 cali9a2da7e438b [fe80::ecee:eeff:feee:eeee%13]:123 May 17 00:26:36.635988 ntpd[1949]: Listen normally on 17 cali292b362b30c [fe80::ecee:eeff:feee:eeee%14]:123 May 17 00:26:39.270489 containerd[1974]: time="2025-05-17T00:26:39.270264199Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:26:39.276561 containerd[1974]: time="2025-05-17T00:26:39.276485114Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512" May 17 00:26:39.286203 containerd[1974]: time="2025-05-17T00:26:39.285977489Z" level=info msg="ImageCreate event 
name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:26:39.294611 containerd[1974]: time="2025-05-17T00:26:39.294537949Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:26:39.295666 containerd[1974]: time="2025-05-17T00:26:39.295294922Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"52671183\" in 6.174740805s" May 17 00:26:39.295666 containerd[1974]: time="2025-05-17T00:26:39.295330842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 17 00:26:39.324301 containerd[1974]: time="2025-05-17T00:26:39.324257825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:26:39.530739 containerd[1974]: time="2025-05-17T00:26:39.530616310Z" level=info msg="CreateContainer within sandbox \"8504e4aee11c90b5308be39bf92b3c6c21448c32c41d3c973d23127764ed2dd2\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 17 00:26:39.551713 containerd[1974]: time="2025-05-17T00:26:39.551554998Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:26:39.552988 containerd[1974]: time="2025-05-17T00:26:39.552860303Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:26:39.597156 containerd[1974]: time="2025-05-17T00:26:39.597090273Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:26:39.624136 containerd[1974]: time="2025-05-17T00:26:39.624009377Z" level=info msg="CreateContainer within sandbox \"8504e4aee11c90b5308be39bf92b3c6c21448c32c41d3c973d23127764ed2dd2\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"243bf5b7c3eddc37ca0610023c5129f407fe96dc0998c6cd517b878731f03e90\"" May 17 00:26:39.632518 kubelet[3186]: E0517 00:26:39.616324 3186 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:26:39.640478 
containerd[1974]: time="2025-05-17T00:26:39.639588512Z" level=info msg="StartContainer for \"243bf5b7c3eddc37ca0610023c5129f407fe96dc0998c6cd517b878731f03e90\"" May 17 00:26:39.646997 kubelet[3186]: E0517 00:26:39.646952 3186 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:26:39.647675 containerd[1974]: time="2025-05-17T00:26:39.647597501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 17 00:26:39.766495 kubelet[3186]: E0517 00:26:39.740713 3186 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-296m4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} 
start failed in pod goldmane-8f77d7b6c-ffn6m_calico-system(6c6feb10-5e10-4718-ae2e-34e0ec7b697f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:26:39.789951 kubelet[3186]: E0517 00:26:39.789189 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-ffn6m" podUID="6c6feb10-5e10-4718-ae2e-34e0ec7b697f" May 17 00:26:39.793997 systemd[1]: Started cri-containerd-243bf5b7c3eddc37ca0610023c5129f407fe96dc0998c6cd517b878731f03e90.scope - libcontainer container 243bf5b7c3eddc37ca0610023c5129f407fe96dc0998c6cd517b878731f03e90. May 17 00:26:39.865993 containerd[1974]: time="2025-05-17T00:26:39.865959891Z" level=info msg="StartContainer for \"243bf5b7c3eddc37ca0610023c5129f407fe96dc0998c6cd517b878731f03e90\" returns successfully" May 17 00:26:40.211971 systemd[1]: Started sshd@11-172.31.23.228:22-147.75.109.163:48812.service - OpenSSH per-connection server daemon (147.75.109.163:48812). May 17 00:26:40.481960 sshd[5964]: Accepted publickey for core from 147.75.109.163 port 48812 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:26:40.487250 sshd[5964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:26:40.493481 systemd-logind[1955]: New session 12 of user core. May 17 00:26:40.497624 systemd[1]: Started session-12.scope - Session 12 of User core. 
May 17 00:26:40.516410 kubelet[3186]: E0517 00:26:40.515674 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-ffn6m" podUID="6c6feb10-5e10-4718-ae2e-34e0ec7b697f" May 17 00:26:40.587361 kubelet[3186]: I0517 00:26:40.586206 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6b48558975-rlpts" podStartSLOduration=27.681517678 podStartE2EDuration="34.586185209s" podCreationTimestamp="2025-05-17 00:26:06 +0000 UTC" firstStartedPulling="2025-05-17 00:26:32.406568429 +0000 UTC m=+48.720891473" lastFinishedPulling="2025-05-17 00:26:39.311235947 +0000 UTC m=+55.625559004" observedRunningTime="2025-05-17 00:26:40.572921639 +0000 UTC m=+56.887244688" watchObservedRunningTime="2025-05-17 00:26:40.586185209 +0000 UTC m=+56.900508257" May 17 00:26:41.275879 containerd[1974]: time="2025-05-17T00:26:41.275826009Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:26:41.277482 containerd[1974]: time="2025-05-17T00:26:41.277436546Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 17 00:26:41.279242 containerd[1974]: time="2025-05-17T00:26:41.279185004Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:26:41.281864 containerd[1974]: time="2025-05-17T00:26:41.281809174Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:26:41.283374 containerd[1974]: time="2025-05-17T00:26:41.282386268Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 1.634750962s" May 17 00:26:41.283374 containerd[1974]: time="2025-05-17T00:26:41.282419082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 17 00:26:41.289526 containerd[1974]: time="2025-05-17T00:26:41.289335472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:26:41.330436 containerd[1974]: time="2025-05-17T00:26:41.330094836Z" level=info msg="CreateContainer within sandbox \"62166d67fc35bb63713aa4c46832acc3c31b40478ced9f0daf969885fcfb8b01\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 17 00:26:41.410652 containerd[1974]: time="2025-05-17T00:26:41.410565551Z" level=info msg="CreateContainer within sandbox \"62166d67fc35bb63713aa4c46832acc3c31b40478ced9f0daf969885fcfb8b01\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"170a60e070605a67c9668d470d84ee7ae76e713634f3843c20c6c97aa4204b55\"" May 17 00:26:41.411672 containerd[1974]: time="2025-05-17T00:26:41.411481538Z" level=info msg="StartContainer for \"170a60e070605a67c9668d470d84ee7ae76e713634f3843c20c6c97aa4204b55\"" May 17 
00:26:41.456958 systemd[1]: Started cri-containerd-170a60e070605a67c9668d470d84ee7ae76e713634f3843c20c6c97aa4204b55.scope - libcontainer container 170a60e070605a67c9668d470d84ee7ae76e713634f3843c20c6c97aa4204b55. May 17 00:26:41.504470 containerd[1974]: time="2025-05-17T00:26:41.504096953Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:26:41.506694 containerd[1974]: time="2025-05-17T00:26:41.506014364Z" level=info msg="StartContainer for \"170a60e070605a67c9668d470d84ee7ae76e713634f3843c20c6c97aa4204b55\" returns successfully" May 17 00:26:41.507396 containerd[1974]: time="2025-05-17T00:26:41.506933172Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:26:41.507396 containerd[1974]: time="2025-05-17T00:26:41.506981842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:26:41.566456 kubelet[3186]: E0517 00:26:41.566223 3186 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:26:41.616199 kubelet[3186]: E0517 00:26:41.616156 3186 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:26:41.616712 kubelet[3186]: E0517 00:26:41.616662 3186 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f414e32a0f9e4b79b1ec41579d3d8398,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8dbcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78df879455-m7stx_calico-system(35be9fdc-2ef5-4d7b-b281-9e429560f362): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:26:41.618273 containerd[1974]: time="2025-05-17T00:26:41.617140673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 17 00:26:41.917365 sshd[5964]: pam_unix(sshd:session): session closed for user core May 17 00:26:41.921599 systemd[1]: sshd@11-172.31.23.228:22-147.75.109.163:48812.service: Deactivated successfully. May 17 00:26:41.923417 systemd[1]: session-12.scope: Deactivated successfully. May 17 00:26:41.924307 systemd-logind[1955]: Session 12 logged out. Waiting for processes to exit. May 17 00:26:41.925645 systemd-logind[1955]: Removed session 12. May 17 00:26:41.948506 systemd[1]: Started sshd@12-172.31.23.228:22-147.75.109.163:48820.service - OpenSSH per-connection server daemon (147.75.109.163:48820). May 17 00:26:42.140987 sshd[6038]: Accepted publickey for core from 147.75.109.163 port 48820 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:26:42.142695 sshd[6038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:26:42.147489 systemd-logind[1955]: New session 13 of user core. May 17 00:26:42.154645 systemd[1]: Started session-13.scope - Session 13 of User core. May 17 00:26:42.457112 sshd[6038]: pam_unix(sshd:session): session closed for user core May 17 00:26:42.461891 systemd-logind[1955]: Session 13 logged out. Waiting for processes to exit. May 17 00:26:42.462318 systemd[1]: sshd@12-172.31.23.228:22-147.75.109.163:48820.service: Deactivated successfully. May 17 00:26:42.464934 systemd[1]: session-13.scope: Deactivated successfully. 
May 17 00:26:42.467939 systemd-logind[1955]: Removed session 13. May 17 00:26:42.497887 systemd[1]: Started sshd@13-172.31.23.228:22-147.75.109.163:48822.service - OpenSSH per-connection server daemon (147.75.109.163:48822). May 17 00:26:42.682710 sshd[6049]: Accepted publickey for core from 147.75.109.163 port 48822 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:26:42.683938 sshd[6049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:26:42.690007 systemd-logind[1955]: New session 14 of user core. May 17 00:26:42.696399 systemd[1]: Started session-14.scope - Session 14 of User core. May 17 00:26:42.964672 sshd[6049]: pam_unix(sshd:session): session closed for user core May 17 00:26:42.972799 systemd[1]: sshd@13-172.31.23.228:22-147.75.109.163:48822.service: Deactivated successfully. May 17 00:26:42.975190 systemd[1]: session-14.scope: Deactivated successfully. May 17 00:26:42.978524 systemd-logind[1955]: Session 14 logged out. Waiting for processes to exit. May 17 00:26:42.981202 systemd-logind[1955]: Removed session 14. May 17 00:26:43.728890 containerd[1974]: time="2025-05-17T00:26:43.728439397Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:26:43.730902 containerd[1974]: time="2025-05-17T00:26:43.730826852Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639" May 17 00:26:43.734459 containerd[1974]: time="2025-05-17T00:26:43.733196380Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:26:43.736751 containerd[1974]: time="2025-05-17T00:26:43.736712079Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:26:43.737488 containerd[1974]: time="2025-05-17T00:26:43.737276714Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 2.119370554s" May 17 00:26:43.737564 containerd[1974]: time="2025-05-17T00:26:43.737495074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\"" May 17 00:26:43.760606 containerd[1974]: time="2025-05-17T00:26:43.760544345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:26:43.833004 containerd[1974]: time="2025-05-17T00:26:43.832848983Z" level=info msg="CreateContainer within sandbox \"62166d67fc35bb63713aa4c46832acc3c31b40478ced9f0daf969885fcfb8b01\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 17 00:26:43.861641 containerd[1974]: time="2025-05-17T00:26:43.861587449Z" level=info msg="CreateContainer within sandbox \"62166d67fc35bb63713aa4c46832acc3c31b40478ced9f0daf969885fcfb8b01\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns 
container id \"5706ca713d77acc8c90856327fb1b320e27fb185ff7d1381b2e22e2108226a18\"" May 17 00:26:43.949638 containerd[1974]: time="2025-05-17T00:26:43.949603133Z" level=info msg="StartContainer for \"5706ca713d77acc8c90856327fb1b320e27fb185ff7d1381b2e22e2108226a18\"" May 17 00:26:43.992812 containerd[1974]: time="2025-05-17T00:26:43.992164436Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:26:43.996217 containerd[1974]: time="2025-05-17T00:26:43.996053899Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:26:43.997824 containerd[1974]: time="2025-05-17T00:26:43.996314686Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:26:44.084046 kubelet[3186]: E0517 00:26:44.083987 3186 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:26:44.084793 kubelet[3186]: E0517 00:26:44.084082 3186 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:26:44.084793 kubelet[3186]: E0517 00:26:44.084242 3186 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8dbcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78df879455-m7stx_calico-system(35be9fdc-2ef5-4d7b-b281-9e429560f362): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:26:44.084652 systemd[1]: Started cri-containerd-5706ca713d77acc8c90856327fb1b320e27fb185ff7d1381b2e22e2108226a18.scope - libcontainer container 5706ca713d77acc8c90856327fb1b320e27fb185ff7d1381b2e22e2108226a18. 
May 17 00:26:44.143080 containerd[1974]: time="2025-05-17T00:26:44.141831357Z" level=info msg="StartContainer for \"5706ca713d77acc8c90856327fb1b320e27fb185ff7d1381b2e22e2108226a18\" returns successfully" May 17 00:26:44.143379 kubelet[3186]: E0517 00:26:44.091763 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-78df879455-m7stx" podUID="35be9fdc-2ef5-4d7b-b281-9e429560f362" May 17 00:26:44.180141 containerd[1974]: time="2025-05-17T00:26:44.180106594Z" level=info msg="StopPodSandbox for \"c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb\"" May 17 00:26:45.242668 kubelet[3186]: I0517 00:26:45.229371 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4gndq" podStartSLOduration=28.541173401 podStartE2EDuration="39.219602739s" podCreationTimestamp="2025-05-17 00:26:06 +0000 UTC" firstStartedPulling="2025-05-17 00:26:33.072854579 +0000 UTC m=+49.387177619" lastFinishedPulling="2025-05-17 00:26:43.751283928 +0000 UTC m=+60.065606957" observedRunningTime="2025-05-17 00:26:45.091502969 +0000 UTC m=+61.405826019" watchObservedRunningTime="2025-05-17 00:26:45.219602739 +0000 UTC m=+61.533925789" May 17 00:26:45.319298 containerd[1974]: 2025-05-17 00:26:44.758 [WARNING][6117] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--228-k8s-goldmane--8f77d7b6c--ffn6m-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"6c6feb10-5e10-4718-ae2e-34e0ec7b697f", ResourceVersion:"1134", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 26, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-228", ContainerID:"3a4540d357e826bb101d78285ba7e3cad25625b0c017a99db8908c015d68d04d", Pod:"goldmane-8f77d7b6c-ffn6m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.17.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali31a86991182", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:45.319298 containerd[1974]: 2025-05-17 00:26:44.764 [INFO][6117] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb" May 17 00:26:45.319298 containerd[1974]: 2025-05-17 00:26:44.765 [INFO][6117] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb" iface="eth0" netns="" May 17 00:26:45.319298 containerd[1974]: 2025-05-17 00:26:44.765 [INFO][6117] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb" May 17 00:26:45.319298 containerd[1974]: 2025-05-17 00:26:44.765 [INFO][6117] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb" May 17 00:26:45.319298 containerd[1974]: 2025-05-17 00:26:45.282 [INFO][6124] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb" HandleID="k8s-pod-network.c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb" Workload="ip--172--31--23--228-k8s-goldmane--8f77d7b6c--ffn6m-eth0" May 17 00:26:45.319298 containerd[1974]: 2025-05-17 00:26:45.288 [INFO][6124] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:45.319298 containerd[1974]: 2025-05-17 00:26:45.288 [INFO][6124] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:26:45.319298 containerd[1974]: 2025-05-17 00:26:45.309 [WARNING][6124] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb" HandleID="k8s-pod-network.c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb" Workload="ip--172--31--23--228-k8s-goldmane--8f77d7b6c--ffn6m-eth0" May 17 00:26:45.319298 containerd[1974]: 2025-05-17 00:26:45.309 [INFO][6124] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb" HandleID="k8s-pod-network.c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb" Workload="ip--172--31--23--228-k8s-goldmane--8f77d7b6c--ffn6m-eth0" May 17 00:26:45.319298 containerd[1974]: 2025-05-17 00:26:45.313 [INFO][6124] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:45.319298 containerd[1974]: 2025-05-17 00:26:45.316 [INFO][6117] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb" May 17 00:26:45.321985 containerd[1974]: time="2025-05-17T00:26:45.319563783Z" level=info msg="TearDown network for sandbox \"c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb\" successfully" May 17 00:26:45.321985 containerd[1974]: time="2025-05-17T00:26:45.319593294Z" level=info msg="StopPodSandbox for \"c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb\" returns successfully" May 17 00:26:45.405467 kubelet[3186]: I0517 00:26:45.405357 3186 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 17 00:26:45.430790 kubelet[3186]: I0517 00:26:45.430762 3186 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 17 00:26:45.461819 containerd[1974]: time="2025-05-17T00:26:45.461493103Z" level=info msg="RemovePodSandbox for \"c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb\"" May 17 00:26:45.470725 containerd[1974]: time="2025-05-17T00:26:45.470663361Z" level=info msg="Forcibly stopping sandbox \"c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb\"" May 17 00:26:45.576889 containerd[1974]: 2025-05-17 00:26:45.517 [WARNING][6138] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--228-k8s-goldmane--8f77d7b6c--ffn6m-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"6c6feb10-5e10-4718-ae2e-34e0ec7b697f", ResourceVersion:"1134", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 26, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-228", ContainerID:"3a4540d357e826bb101d78285ba7e3cad25625b0c017a99db8908c015d68d04d", Pod:"goldmane-8f77d7b6c-ffn6m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.17.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali31a86991182", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:45.576889 containerd[1974]: 2025-05-17 00:26:45.518 [INFO][6138] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb" May 17 00:26:45.576889 containerd[1974]: 2025-05-17 00:26:45.518 [INFO][6138] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb" iface="eth0" netns="" May 17 00:26:45.576889 containerd[1974]: 2025-05-17 00:26:45.518 [INFO][6138] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb" May 17 00:26:45.576889 containerd[1974]: 2025-05-17 00:26:45.518 [INFO][6138] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb" May 17 00:26:45.576889 containerd[1974]: 2025-05-17 00:26:45.558 [INFO][6145] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb" HandleID="k8s-pod-network.c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb" Workload="ip--172--31--23--228-k8s-goldmane--8f77d7b6c--ffn6m-eth0" May 17 00:26:45.576889 containerd[1974]: 2025-05-17 00:26:45.558 [INFO][6145] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:45.576889 containerd[1974]: 2025-05-17 00:26:45.558 [INFO][6145] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:26:45.576889 containerd[1974]: 2025-05-17 00:26:45.566 [WARNING][6145] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb" HandleID="k8s-pod-network.c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb" Workload="ip--172--31--23--228-k8s-goldmane--8f77d7b6c--ffn6m-eth0" May 17 00:26:45.576889 containerd[1974]: 2025-05-17 00:26:45.566 [INFO][6145] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb" HandleID="k8s-pod-network.c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb" Workload="ip--172--31--23--228-k8s-goldmane--8f77d7b6c--ffn6m-eth0" May 17 00:26:45.576889 containerd[1974]: 2025-05-17 00:26:45.569 [INFO][6145] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:45.576889 containerd[1974]: 2025-05-17 00:26:45.572 [INFO][6138] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb" May 17 00:26:45.576889 containerd[1974]: time="2025-05-17T00:26:45.575624347Z" level=info msg="TearDown network for sandbox \"c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb\" successfully" May 17 00:26:45.600127 containerd[1974]: time="2025-05-17T00:26:45.600056956Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:26:45.610059 containerd[1974]: time="2025-05-17T00:26:45.609994586Z" level=info msg="RemovePodSandbox \"c407d4a8632ea98c710faef5c4bedab2ebe11c2a3e6fa455427464ef178c8ffb\" returns successfully" May 17 00:26:45.614666 containerd[1974]: time="2025-05-17T00:26:45.614613882Z" level=info msg="StopPodSandbox for \"7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79\"" May 17 00:26:45.738490 containerd[1974]: 2025-05-17 00:26:45.661 [WARNING][6159] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--228-k8s-calico--apiserver--5f57798587--cckmk-eth0", GenerateName:"calico-apiserver-5f57798587-", Namespace:"calico-apiserver", SelfLink:"", UID:"fb1a5092-3a29-4f17-a060-ae80b6cdd361", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 26, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f57798587", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-228", ContainerID:"b95cb90ef8faa3d3b24491ba3d895e60b1afe42966eaa742a44cb26d2b3dadbf", Pod:"calico-apiserver-5f57798587-cckmk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali25e046d9e65", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:45.738490 containerd[1974]: 2025-05-17 00:26:45.661 [INFO][6159] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79" May 17 00:26:45.738490 containerd[1974]: 2025-05-17 00:26:45.661 [INFO][6159] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79" iface="eth0" netns="" May 17 00:26:45.738490 containerd[1974]: 2025-05-17 00:26:45.661 [INFO][6159] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79" May 17 00:26:45.738490 containerd[1974]: 2025-05-17 00:26:45.661 [INFO][6159] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79" May 17 00:26:45.738490 containerd[1974]: 2025-05-17 00:26:45.710 [INFO][6167] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79" HandleID="k8s-pod-network.7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79" Workload="ip--172--31--23--228-k8s-calico--apiserver--5f57798587--cckmk-eth0" May 17 00:26:45.738490 containerd[1974]: 2025-05-17 00:26:45.710 [INFO][6167] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:45.738490 containerd[1974]: 2025-05-17 00:26:45.710 [INFO][6167] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:26:45.738490 containerd[1974]: 2025-05-17 00:26:45.721 [WARNING][6167] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79" HandleID="k8s-pod-network.7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79" Workload="ip--172--31--23--228-k8s-calico--apiserver--5f57798587--cckmk-eth0" May 17 00:26:45.738490 containerd[1974]: 2025-05-17 00:26:45.721 [INFO][6167] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79" HandleID="k8s-pod-network.7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79" Workload="ip--172--31--23--228-k8s-calico--apiserver--5f57798587--cckmk-eth0" May 17 00:26:45.738490 containerd[1974]: 2025-05-17 00:26:45.723 [INFO][6167] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:45.738490 containerd[1974]: 2025-05-17 00:26:45.727 [INFO][6159] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79" May 17 00:26:45.738490 containerd[1974]: time="2025-05-17T00:26:45.736829348Z" level=info msg="TearDown network for sandbox \"7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79\" successfully" May 17 00:26:45.738490 containerd[1974]: time="2025-05-17T00:26:45.736865681Z" level=info msg="StopPodSandbox for \"7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79\" returns successfully" May 17 00:26:45.742091 containerd[1974]: time="2025-05-17T00:26:45.739729074Z" level=info msg="RemovePodSandbox for \"7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79\"" May 17 00:26:45.742091 containerd[1974]: time="2025-05-17T00:26:45.739768445Z" level=info msg="Forcibly stopping sandbox \"7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79\"" May 17 00:26:45.749456 systemd[1]: run-containerd-runc-k8s.io-243bf5b7c3eddc37ca0610023c5129f407fe96dc0998c6cd517b878731f03e90-runc.aUbuG9.mount: Deactivated successfully. May 17 00:26:45.843476 containerd[1974]: 2025-05-17 00:26:45.802 [WARNING][6196] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--228-k8s-calico--apiserver--5f57798587--cckmk-eth0", GenerateName:"calico-apiserver-5f57798587-", Namespace:"calico-apiserver", SelfLink:"", UID:"fb1a5092-3a29-4f17-a060-ae80b6cdd361", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 26, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f57798587", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-228", ContainerID:"b95cb90ef8faa3d3b24491ba3d895e60b1afe42966eaa742a44cb26d2b3dadbf", Pod:"calico-apiserver-5f57798587-cckmk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali25e046d9e65", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:45.843476 containerd[1974]: 2025-05-17 00:26:45.802 [INFO][6196] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79" May 17 00:26:45.843476 containerd[1974]: 2025-05-17 00:26:45.802 [INFO][6196] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79" iface="eth0" netns="" May 17 00:26:45.843476 containerd[1974]: 2025-05-17 00:26:45.802 [INFO][6196] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79" May 17 00:26:45.843476 containerd[1974]: 2025-05-17 00:26:45.802 [INFO][6196] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79" May 17 00:26:45.843476 containerd[1974]: 2025-05-17 00:26:45.829 [INFO][6206] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79" HandleID="k8s-pod-network.7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79" Workload="ip--172--31--23--228-k8s-calico--apiserver--5f57798587--cckmk-eth0" May 17 00:26:45.843476 containerd[1974]: 2025-05-17 00:26:45.829 [INFO][6206] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:45.843476 containerd[1974]: 2025-05-17 00:26:45.830 [INFO][6206] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:26:45.843476 containerd[1974]: 2025-05-17 00:26:45.837 [WARNING][6206] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79" HandleID="k8s-pod-network.7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79" Workload="ip--172--31--23--228-k8s-calico--apiserver--5f57798587--cckmk-eth0" May 17 00:26:45.843476 containerd[1974]: 2025-05-17 00:26:45.837 [INFO][6206] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79" HandleID="k8s-pod-network.7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79" Workload="ip--172--31--23--228-k8s-calico--apiserver--5f57798587--cckmk-eth0" May 17 00:26:45.843476 containerd[1974]: 2025-05-17 00:26:45.838 [INFO][6206] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:45.843476 containerd[1974]: 2025-05-17 00:26:45.841 [INFO][6196] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79" May 17 00:26:45.843476 containerd[1974]: time="2025-05-17T00:26:45.843416314Z" level=info msg="TearDown network for sandbox \"7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79\" successfully" May 17 00:26:45.853268 containerd[1974]: time="2025-05-17T00:26:45.853212410Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:26:45.853378 containerd[1974]: time="2025-05-17T00:26:45.853289666Z" level=info msg="RemovePodSandbox \"7b88f9c38ab1f507caf053ed84cd7b57bf285e1b85c5bde389167157a0191f79\" returns successfully" May 17 00:26:45.853895 containerd[1974]: time="2025-05-17T00:26:45.853860488Z" level=info msg="StopPodSandbox for \"11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215\"" May 17 00:26:45.953105 containerd[1974]: 2025-05-17 00:26:45.904 [WARNING][6220] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--228-k8s-calico--kube--controllers--6b48558975--rlpts-eth0", GenerateName:"calico-kube-controllers-6b48558975-", Namespace:"calico-system", SelfLink:"", UID:"8a71913d-89eb-4c9e-9dfe-b7eb7c1fd2b5", ResourceVersion:"1137", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 26, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b48558975", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-228", ContainerID:"8504e4aee11c90b5308be39bf92b3c6c21448c32c41d3c973d23127764ed2dd2", Pod:"calico-kube-controllers-6b48558975-rlpts", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.17.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali92564b274da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:45.953105 containerd[1974]: 2025-05-17 00:26:45.905 [INFO][6220] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215" May 17 00:26:45.953105 containerd[1974]: 2025-05-17 00:26:45.905 [INFO][6220] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215" iface="eth0" netns="" May 17 00:26:45.953105 containerd[1974]: 2025-05-17 00:26:45.905 [INFO][6220] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215" May 17 00:26:45.953105 containerd[1974]: 2025-05-17 00:26:45.905 [INFO][6220] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215" May 17 00:26:45.953105 containerd[1974]: 2025-05-17 00:26:45.939 [INFO][6227] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215" HandleID="k8s-pod-network.11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215" Workload="ip--172--31--23--228-k8s-calico--kube--controllers--6b48558975--rlpts-eth0" May 17 00:26:45.953105 containerd[1974]: 2025-05-17 00:26:45.940 [INFO][6227] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:45.953105 containerd[1974]: 2025-05-17 00:26:45.940 [INFO][6227] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:26:45.953105 containerd[1974]: 2025-05-17 00:26:45.947 [WARNING][6227] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215" HandleID="k8s-pod-network.11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215" Workload="ip--172--31--23--228-k8s-calico--kube--controllers--6b48558975--rlpts-eth0" May 17 00:26:45.953105 containerd[1974]: 2025-05-17 00:26:45.947 [INFO][6227] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215" HandleID="k8s-pod-network.11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215" Workload="ip--172--31--23--228-k8s-calico--kube--controllers--6b48558975--rlpts-eth0" May 17 00:26:45.953105 containerd[1974]: 2025-05-17 00:26:45.948 [INFO][6227] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:45.953105 containerd[1974]: 2025-05-17 00:26:45.951 [INFO][6220] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215" May 17 00:26:45.954888 containerd[1974]: time="2025-05-17T00:26:45.953190887Z" level=info msg="TearDown network for sandbox \"11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215\" successfully" May 17 00:26:45.954888 containerd[1974]: time="2025-05-17T00:26:45.953220556Z" level=info msg="StopPodSandbox for \"11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215\" returns successfully" May 17 00:26:45.954888 containerd[1974]: time="2025-05-17T00:26:45.953891630Z" level=info msg="RemovePodSandbox for \"11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215\"" May 17 00:26:45.954888 containerd[1974]: time="2025-05-17T00:26:45.953923724Z" level=info msg="Forcibly stopping sandbox \"11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215\"" May 17 00:26:46.039485 containerd[1974]: 2025-05-17 00:26:45.990 [WARNING][6241] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--228-k8s-calico--kube--controllers--6b48558975--rlpts-eth0", GenerateName:"calico-kube-controllers-6b48558975-", Namespace:"calico-system", SelfLink:"", UID:"8a71913d-89eb-4c9e-9dfe-b7eb7c1fd2b5", ResourceVersion:"1137", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 26, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b48558975", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-228", ContainerID:"8504e4aee11c90b5308be39bf92b3c6c21448c32c41d3c973d23127764ed2dd2", Pod:"calico-kube-controllers-6b48558975-rlpts", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.17.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali92564b274da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:46.039485 containerd[1974]: 2025-05-17 00:26:45.990 [INFO][6241] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215" May 17 00:26:46.039485 containerd[1974]: 2025-05-17 00:26:45.990 [INFO][6241] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215" iface="eth0" netns="" May 17 00:26:46.039485 containerd[1974]: 2025-05-17 00:26:45.990 [INFO][6241] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215" May 17 00:26:46.039485 containerd[1974]: 2025-05-17 00:26:45.990 [INFO][6241] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215" May 17 00:26:46.039485 containerd[1974]: 2025-05-17 00:26:46.018 [INFO][6249] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215" HandleID="k8s-pod-network.11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215" Workload="ip--172--31--23--228-k8s-calico--kube--controllers--6b48558975--rlpts-eth0" May 17 00:26:46.039485 containerd[1974]: 2025-05-17 00:26:46.018 [INFO][6249] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:46.039485 containerd[1974]: 2025-05-17 00:26:46.018 [INFO][6249] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:26:46.039485 containerd[1974]: 2025-05-17 00:26:46.025 [WARNING][6249] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215" HandleID="k8s-pod-network.11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215" Workload="ip--172--31--23--228-k8s-calico--kube--controllers--6b48558975--rlpts-eth0" May 17 00:26:46.039485 containerd[1974]: 2025-05-17 00:26:46.025 [INFO][6249] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215" HandleID="k8s-pod-network.11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215" Workload="ip--172--31--23--228-k8s-calico--kube--controllers--6b48558975--rlpts-eth0" May 17 00:26:46.039485 containerd[1974]: 2025-05-17 00:26:46.034 [INFO][6249] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:46.039485 containerd[1974]: 2025-05-17 00:26:46.036 [INFO][6241] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215" May 17 00:26:46.039485 containerd[1974]: time="2025-05-17T00:26:46.038630099Z" level=info msg="TearDown network for sandbox \"11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215\" successfully" May 17 00:26:46.051327 containerd[1974]: time="2025-05-17T00:26:46.051092656Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:26:46.051327 containerd[1974]: time="2025-05-17T00:26:46.051190884Z" level=info msg="RemovePodSandbox \"11819756d72959fc96a166adbd5f01c7d005db871ebea65b647d38db1ca6f215\" returns successfully" May 17 00:26:46.052005 containerd[1974]: time="2025-05-17T00:26:46.051923065Z" level=info msg="StopPodSandbox for \"61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d\"" May 17 00:26:46.130652 containerd[1974]: 2025-05-17 00:26:46.092 [WARNING][6263] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--228-k8s-calico--apiserver--5f57798587--b77km-eth0", GenerateName:"calico-apiserver-5f57798587-", Namespace:"calico-apiserver", SelfLink:"", UID:"a481fc84-c654-4b7a-8053-527437371f0f", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 26, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f57798587", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-228", ContainerID:"5edd0cd1908ace2ba7c9b8f2c4263cd10e36210374829de75ee4faf607f40b07", Pod:"calico-apiserver-5f57798587-b77km", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic71a016879e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:46.130652 containerd[1974]: 2025-05-17 00:26:46.092 [INFO][6263] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d" May 17 00:26:46.130652 containerd[1974]: 2025-05-17 00:26:46.092 [INFO][6263] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d" iface="eth0" netns="" May 17 00:26:46.130652 containerd[1974]: 2025-05-17 00:26:46.092 [INFO][6263] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d" May 17 00:26:46.130652 containerd[1974]: 2025-05-17 00:26:46.092 [INFO][6263] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d" May 17 00:26:46.130652 containerd[1974]: 2025-05-17 00:26:46.118 [INFO][6270] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d" HandleID="k8s-pod-network.61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d" Workload="ip--172--31--23--228-k8s-calico--apiserver--5f57798587--b77km-eth0" May 17 00:26:46.130652 containerd[1974]: 2025-05-17 00:26:46.118 [INFO][6270] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:46.130652 containerd[1974]: 2025-05-17 00:26:46.118 [INFO][6270] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:26:46.130652 containerd[1974]: 2025-05-17 00:26:46.125 [WARNING][6270] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d" HandleID="k8s-pod-network.61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d" Workload="ip--172--31--23--228-k8s-calico--apiserver--5f57798587--b77km-eth0" May 17 00:26:46.130652 containerd[1974]: 2025-05-17 00:26:46.125 [INFO][6270] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d" HandleID="k8s-pod-network.61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d" Workload="ip--172--31--23--228-k8s-calico--apiserver--5f57798587--b77km-eth0" May 17 00:26:46.130652 containerd[1974]: 2025-05-17 00:26:46.126 [INFO][6270] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:46.130652 containerd[1974]: 2025-05-17 00:26:46.128 [INFO][6263] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d" May 17 00:26:46.135276 containerd[1974]: time="2025-05-17T00:26:46.131867784Z" level=info msg="TearDown network for sandbox \"61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d\" successfully" May 17 00:26:46.135276 containerd[1974]: time="2025-05-17T00:26:46.131974649Z" level=info msg="StopPodSandbox for \"61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d\" returns successfully" May 17 00:26:46.135276 containerd[1974]: time="2025-05-17T00:26:46.133324482Z" level=info msg="RemovePodSandbox for \"61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d\"" May 17 00:26:46.135276 containerd[1974]: time="2025-05-17T00:26:46.133348487Z" level=info msg="Forcibly stopping sandbox \"61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d\"" May 17 00:26:46.223536 containerd[1974]: 2025-05-17 00:26:46.172 [WARNING][6284] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--228-k8s-calico--apiserver--5f57798587--b77km-eth0", GenerateName:"calico-apiserver-5f57798587-", Namespace:"calico-apiserver", SelfLink:"", UID:"a481fc84-c654-4b7a-8053-527437371f0f", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 26, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f57798587", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-228", ContainerID:"5edd0cd1908ace2ba7c9b8f2c4263cd10e36210374829de75ee4faf607f40b07", Pod:"calico-apiserver-5f57798587-b77km", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic71a016879e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:46.223536 containerd[1974]: 2025-05-17 00:26:46.173 [INFO][6284] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d" May 17 00:26:46.223536 containerd[1974]: 2025-05-17 00:26:46.173 [INFO][6284] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d" iface="eth0" netns="" May 17 00:26:46.223536 containerd[1974]: 2025-05-17 00:26:46.173 [INFO][6284] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d" May 17 00:26:46.223536 containerd[1974]: 2025-05-17 00:26:46.173 [INFO][6284] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d" May 17 00:26:46.223536 containerd[1974]: 2025-05-17 00:26:46.207 [INFO][6292] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d" HandleID="k8s-pod-network.61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d" Workload="ip--172--31--23--228-k8s-calico--apiserver--5f57798587--b77km-eth0" May 17 00:26:46.223536 containerd[1974]: 2025-05-17 00:26:46.207 [INFO][6292] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:46.223536 containerd[1974]: 2025-05-17 00:26:46.207 [INFO][6292] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:26:46.223536 containerd[1974]: 2025-05-17 00:26:46.215 [WARNING][6292] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d" HandleID="k8s-pod-network.61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d" Workload="ip--172--31--23--228-k8s-calico--apiserver--5f57798587--b77km-eth0" May 17 00:26:46.223536 containerd[1974]: 2025-05-17 00:26:46.215 [INFO][6292] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d" HandleID="k8s-pod-network.61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d" Workload="ip--172--31--23--228-k8s-calico--apiserver--5f57798587--b77km-eth0" May 17 00:26:46.223536 containerd[1974]: 2025-05-17 00:26:46.218 [INFO][6292] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:46.223536 containerd[1974]: 2025-05-17 00:26:46.221 [INFO][6284] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d" May 17 00:26:46.225315 containerd[1974]: time="2025-05-17T00:26:46.223864600Z" level=info msg="TearDown network for sandbox \"61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d\" successfully" May 17 00:26:46.242216 containerd[1974]: time="2025-05-17T00:26:46.241294757Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:26:46.242216 containerd[1974]: time="2025-05-17T00:26:46.241373435Z" level=info msg="RemovePodSandbox \"61eb9bd346c4e27ff07d9fed0491bd820530c034920f0c73cfb61789b025904d\" returns successfully" May 17 00:26:46.245477 containerd[1974]: time="2025-05-17T00:26:46.242808538Z" level=info msg="StopPodSandbox for \"cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047\"" May 17 00:26:46.332464 containerd[1974]: 2025-05-17 00:26:46.292 [WARNING][6306] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--228-k8s-coredns--7c65d6cfc9--jsbnl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9832b571-3650-40cf-9d5c-47e3967ad978", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 25, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-228", ContainerID:"ecf523187133c4f0a1f2dc282b62d7da7367b57583c5b0ce8b9db3c346b1ac20", Pod:"coredns-7c65d6cfc9-jsbnl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9a2da7e438b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:46.332464 containerd[1974]: 2025-05-17 00:26:46.292 [INFO][6306] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047" May 17 00:26:46.332464 containerd[1974]: 2025-05-17 00:26:46.292 [INFO][6306] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047" iface="eth0" netns="" May 17 00:26:46.332464 containerd[1974]: 2025-05-17 00:26:46.292 [INFO][6306] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047" May 17 00:26:46.332464 containerd[1974]: 2025-05-17 00:26:46.292 [INFO][6306] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047" May 17 00:26:46.332464 containerd[1974]: 2025-05-17 00:26:46.318 [INFO][6314] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047" HandleID="k8s-pod-network.cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047" Workload="ip--172--31--23--228-k8s-coredns--7c65d6cfc9--jsbnl-eth0" May 17 00:26:46.332464 containerd[1974]: 2025-05-17 00:26:46.318 [INFO][6314] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:46.332464 containerd[1974]: 2025-05-17 00:26:46.318 [INFO][6314] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:26:46.332464 containerd[1974]: 2025-05-17 00:26:46.326 [WARNING][6314] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047" HandleID="k8s-pod-network.cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047" Workload="ip--172--31--23--228-k8s-coredns--7c65d6cfc9--jsbnl-eth0" May 17 00:26:46.332464 containerd[1974]: 2025-05-17 00:26:46.326 [INFO][6314] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047" HandleID="k8s-pod-network.cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047" Workload="ip--172--31--23--228-k8s-coredns--7c65d6cfc9--jsbnl-eth0" May 17 00:26:46.332464 containerd[1974]: 2025-05-17 00:26:46.328 [INFO][6314] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:46.332464 containerd[1974]: 2025-05-17 00:26:46.330 [INFO][6306] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047" May 17 00:26:46.333646 containerd[1974]: time="2025-05-17T00:26:46.333286321Z" level=info msg="TearDown network for sandbox \"cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047\" successfully" May 17 00:26:46.333646 containerd[1974]: time="2025-05-17T00:26:46.333313336Z" level=info msg="StopPodSandbox for \"cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047\" returns successfully" May 17 00:26:46.334006 containerd[1974]: time="2025-05-17T00:26:46.333978601Z" level=info msg="RemovePodSandbox for \"cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047\"" May 17 00:26:46.334107 containerd[1974]: time="2025-05-17T00:26:46.334087537Z" level=info msg="Forcibly stopping sandbox \"cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047\"" May 17 00:26:46.429408 containerd[1974]: 2025-05-17 00:26:46.373 [WARNING][6328] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--228-k8s-coredns--7c65d6cfc9--jsbnl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9832b571-3650-40cf-9d5c-47e3967ad978", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 25, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-228", ContainerID:"ecf523187133c4f0a1f2dc282b62d7da7367b57583c5b0ce8b9db3c346b1ac20", Pod:"coredns-7c65d6cfc9-jsbnl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9a2da7e438b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:46.429408 containerd[1974]: 2025-05-17 00:26:46.373 [INFO][6328] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047" May 17 00:26:46.429408 containerd[1974]: 2025-05-17 00:26:46.373 [INFO][6328] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047" iface="eth0" netns="" May 17 00:26:46.429408 containerd[1974]: 2025-05-17 00:26:46.373 [INFO][6328] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047" May 17 00:26:46.429408 containerd[1974]: 2025-05-17 00:26:46.373 [INFO][6328] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047" May 17 00:26:46.429408 containerd[1974]: 2025-05-17 00:26:46.410 [INFO][6335] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047" HandleID="k8s-pod-network.cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047" Workload="ip--172--31--23--228-k8s-coredns--7c65d6cfc9--jsbnl-eth0" May 17 00:26:46.429408 containerd[1974]: 2025-05-17 00:26:46.410 [INFO][6335] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:46.429408 containerd[1974]: 2025-05-17 00:26:46.411 [INFO][6335] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:26:46.429408 containerd[1974]: 2025-05-17 00:26:46.419 [WARNING][6335] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047" HandleID="k8s-pod-network.cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047" Workload="ip--172--31--23--228-k8s-coredns--7c65d6cfc9--jsbnl-eth0" May 17 00:26:46.429408 containerd[1974]: 2025-05-17 00:26:46.419 [INFO][6335] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047" HandleID="k8s-pod-network.cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047" Workload="ip--172--31--23--228-k8s-coredns--7c65d6cfc9--jsbnl-eth0" May 17 00:26:46.429408 containerd[1974]: 2025-05-17 00:26:46.422 [INFO][6335] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:46.429408 containerd[1974]: 2025-05-17 00:26:46.425 [INFO][6328] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047" May 17 00:26:46.429408 containerd[1974]: time="2025-05-17T00:26:46.428471238Z" level=info msg="TearDown network for sandbox \"cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047\" successfully" May 17 00:26:46.457308 containerd[1974]: time="2025-05-17T00:26:46.456668058Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:26:46.457308 containerd[1974]: time="2025-05-17T00:26:46.456792328Z" level=info msg="RemovePodSandbox \"cecd358a20e27358527cefc23e3a77b7ac6ee9df8d4e6a0cead3be1006967047\" returns successfully" May 17 00:26:46.457624 containerd[1974]: time="2025-05-17T00:26:46.457597359Z" level=info msg="StopPodSandbox for \"afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67\"" May 17 00:26:46.552390 containerd[1974]: 2025-05-17 00:26:46.507 [WARNING][6350] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--228-k8s-csi--node--driver--4gndq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d1caf04b-d279-4556-9507-efceb97ef03e", ResourceVersion:"1195", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 26, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-228", ContainerID:"62166d67fc35bb63713aa4c46832acc3c31b40478ced9f0daf969885fcfb8b01", Pod:"csi-node-driver-4gndq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.17.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali292b362b30c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:46.552390 containerd[1974]: 2025-05-17 00:26:46.507 [INFO][6350] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67" May 17 00:26:46.552390 containerd[1974]: 2025-05-17 00:26:46.507 [INFO][6350] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67" iface="eth0" netns="" May 17 00:26:46.552390 containerd[1974]: 2025-05-17 00:26:46.507 [INFO][6350] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67" May 17 00:26:46.552390 containerd[1974]: 2025-05-17 00:26:46.507 [INFO][6350] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67" May 17 00:26:46.552390 containerd[1974]: 2025-05-17 00:26:46.538 [INFO][6358] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67" HandleID="k8s-pod-network.afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67" Workload="ip--172--31--23--228-k8s-csi--node--driver--4gndq-eth0" May 17 00:26:46.552390 containerd[1974]: 2025-05-17 00:26:46.538 [INFO][6358] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:46.552390 containerd[1974]: 2025-05-17 00:26:46.538 [INFO][6358] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:26:46.552390 containerd[1974]: 2025-05-17 00:26:46.544 [WARNING][6358] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67" HandleID="k8s-pod-network.afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67" Workload="ip--172--31--23--228-k8s-csi--node--driver--4gndq-eth0" May 17 00:26:46.552390 containerd[1974]: 2025-05-17 00:26:46.545 [INFO][6358] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67" HandleID="k8s-pod-network.afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67" Workload="ip--172--31--23--228-k8s-csi--node--driver--4gndq-eth0" May 17 00:26:46.552390 containerd[1974]: 2025-05-17 00:26:46.548 [INFO][6358] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:46.552390 containerd[1974]: 2025-05-17 00:26:46.550 [INFO][6350] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67" May 17 00:26:46.553214 containerd[1974]: time="2025-05-17T00:26:46.552464035Z" level=info msg="TearDown network for sandbox \"afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67\" successfully" May 17 00:26:46.553214 containerd[1974]: time="2025-05-17T00:26:46.552494032Z" level=info msg="StopPodSandbox for \"afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67\" returns successfully" May 17 00:26:46.553214 containerd[1974]: time="2025-05-17T00:26:46.552994614Z" level=info msg="RemovePodSandbox for \"afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67\"" May 17 00:26:46.553214 containerd[1974]: time="2025-05-17T00:26:46.553027138Z" level=info msg="Forcibly stopping sandbox \"afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67\"" May 17 00:26:46.633952 containerd[1974]: 2025-05-17 00:26:46.589 [WARNING][6372] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--228-k8s-csi--node--driver--4gndq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d1caf04b-d279-4556-9507-efceb97ef03e", ResourceVersion:"1195", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 26, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-228", ContainerID:"62166d67fc35bb63713aa4c46832acc3c31b40478ced9f0daf969885fcfb8b01", Pod:"csi-node-driver-4gndq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.17.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali292b362b30c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:46.633952 containerd[1974]: 2025-05-17 00:26:46.592 [INFO][6372] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67" May 17 00:26:46.633952 containerd[1974]: 2025-05-17 00:26:46.592 [INFO][6372] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67" iface="eth0" netns="" May 17 00:26:46.633952 containerd[1974]: 2025-05-17 00:26:46.592 [INFO][6372] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67" May 17 00:26:46.633952 containerd[1974]: 2025-05-17 00:26:46.592 [INFO][6372] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67" May 17 00:26:46.633952 containerd[1974]: 2025-05-17 00:26:46.617 [INFO][6379] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67" HandleID="k8s-pod-network.afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67" Workload="ip--172--31--23--228-k8s-csi--node--driver--4gndq-eth0" May 17 00:26:46.633952 containerd[1974]: 2025-05-17 00:26:46.617 [INFO][6379] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:46.633952 containerd[1974]: 2025-05-17 00:26:46.617 [INFO][6379] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:26:46.633952 containerd[1974]: 2025-05-17 00:26:46.628 [WARNING][6379] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67" HandleID="k8s-pod-network.afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67" Workload="ip--172--31--23--228-k8s-csi--node--driver--4gndq-eth0" May 17 00:26:46.633952 containerd[1974]: 2025-05-17 00:26:46.628 [INFO][6379] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67" HandleID="k8s-pod-network.afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67" Workload="ip--172--31--23--228-k8s-csi--node--driver--4gndq-eth0" May 17 00:26:46.633952 containerd[1974]: 2025-05-17 00:26:46.630 [INFO][6379] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:46.633952 containerd[1974]: 2025-05-17 00:26:46.631 [INFO][6372] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67" May 17 00:26:46.633952 containerd[1974]: time="2025-05-17T00:26:46.633951147Z" level=info msg="TearDown network for sandbox \"afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67\" successfully" May 17 00:26:46.639972 containerd[1974]: time="2025-05-17T00:26:46.639801424Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:26:46.639972 containerd[1974]: time="2025-05-17T00:26:46.639879014Z" level=info msg="RemovePodSandbox \"afa2385f11ac4968c6681d4def496ea8b991d790925f7c2e1e76624eef213e67\" returns successfully" May 17 00:26:46.640447 containerd[1974]: time="2025-05-17T00:26:46.640411062Z" level=info msg="StopPodSandbox for \"91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568\"" May 17 00:26:46.714529 containerd[1974]: 2025-05-17 00:26:46.676 [WARNING][6393] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568" WorkloadEndpoint="ip--172--31--23--228-k8s-whisker--79c6d7d9f5--8hp27-eth0" May 17 00:26:46.714529 containerd[1974]: 2025-05-17 00:26:46.676 [INFO][6393] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568" May 17 00:26:46.714529 containerd[1974]: 2025-05-17 00:26:46.676 [INFO][6393] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568" iface="eth0" netns="" May 17 00:26:46.714529 containerd[1974]: 2025-05-17 00:26:46.676 [INFO][6393] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568" May 17 00:26:46.714529 containerd[1974]: 2025-05-17 00:26:46.676 [INFO][6393] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568" May 17 00:26:46.714529 containerd[1974]: 2025-05-17 00:26:46.702 [INFO][6400] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568" HandleID="k8s-pod-network.91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568" Workload="ip--172--31--23--228-k8s-whisker--79c6d7d9f5--8hp27-eth0" May 17 00:26:46.714529 containerd[1974]: 2025-05-17 00:26:46.702 [INFO][6400] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:46.714529 containerd[1974]: 2025-05-17 00:26:46.702 [INFO][6400] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:26:46.714529 containerd[1974]: 2025-05-17 00:26:46.708 [WARNING][6400] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568" HandleID="k8s-pod-network.91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568" Workload="ip--172--31--23--228-k8s-whisker--79c6d7d9f5--8hp27-eth0" May 17 00:26:46.714529 containerd[1974]: 2025-05-17 00:26:46.708 [INFO][6400] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568" HandleID="k8s-pod-network.91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568" Workload="ip--172--31--23--228-k8s-whisker--79c6d7d9f5--8hp27-eth0" May 17 00:26:46.714529 containerd[1974]: 2025-05-17 00:26:46.710 [INFO][6400] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:46.714529 containerd[1974]: 2025-05-17 00:26:46.712 [INFO][6393] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568" May 17 00:26:46.714529 containerd[1974]: time="2025-05-17T00:26:46.714496817Z" level=info msg="TearDown network for sandbox \"91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568\" successfully" May 17 00:26:46.714529 containerd[1974]: time="2025-05-17T00:26:46.714529108Z" level=info msg="StopPodSandbox for \"91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568\" returns successfully" May 17 00:26:46.716254 containerd[1974]: time="2025-05-17T00:26:46.716136558Z" level=info msg="RemovePodSandbox for \"91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568\"" May 17 00:26:46.716254 containerd[1974]: time="2025-05-17T00:26:46.716174360Z" level=info msg="Forcibly stopping sandbox \"91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568\"" May 17 00:26:46.791515 containerd[1974]: 2025-05-17 00:26:46.752 [WARNING][6414] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568" WorkloadEndpoint="ip--172--31--23--228-k8s-whisker--79c6d7d9f5--8hp27-eth0" May 17 00:26:46.791515 containerd[1974]: 2025-05-17 00:26:46.753 [INFO][6414] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568" May 17 00:26:46.791515 containerd[1974]: 2025-05-17 00:26:46.753 [INFO][6414] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568" iface="eth0" netns="" May 17 00:26:46.791515 containerd[1974]: 2025-05-17 00:26:46.753 [INFO][6414] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568" May 17 00:26:46.791515 containerd[1974]: 2025-05-17 00:26:46.753 [INFO][6414] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568" May 17 00:26:46.791515 containerd[1974]: 2025-05-17 00:26:46.778 [INFO][6421] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568" HandleID="k8s-pod-network.91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568" Workload="ip--172--31--23--228-k8s-whisker--79c6d7d9f5--8hp27-eth0" May 17 00:26:46.791515 containerd[1974]: 2025-05-17 00:26:46.779 [INFO][6421] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:46.791515 containerd[1974]: 2025-05-17 00:26:46.779 [INFO][6421] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:26:46.791515 containerd[1974]: 2025-05-17 00:26:46.785 [WARNING][6421] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568" HandleID="k8s-pod-network.91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568" Workload="ip--172--31--23--228-k8s-whisker--79c6d7d9f5--8hp27-eth0" May 17 00:26:46.791515 containerd[1974]: 2025-05-17 00:26:46.785 [INFO][6421] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568" HandleID="k8s-pod-network.91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568" Workload="ip--172--31--23--228-k8s-whisker--79c6d7d9f5--8hp27-eth0" May 17 00:26:46.791515 containerd[1974]: 2025-05-17 00:26:46.787 [INFO][6421] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:46.791515 containerd[1974]: 2025-05-17 00:26:46.789 [INFO][6414] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568" May 17 00:26:46.793780 containerd[1974]: time="2025-05-17T00:26:46.791566768Z" level=info msg="TearDown network for sandbox \"91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568\" successfully" May 17 00:26:46.798643 containerd[1974]: time="2025-05-17T00:26:46.798578052Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:26:46.798746 containerd[1974]: time="2025-05-17T00:26:46.798653719Z" level=info msg="RemovePodSandbox \"91842645e29116cc3e5d2a7956f57bed54fe09f0ca81663c666430783faed568\" returns successfully" May 17 00:26:46.799147 containerd[1974]: time="2025-05-17T00:26:46.799115776Z" level=info msg="StopPodSandbox for \"a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9\"" May 17 00:26:46.879741 containerd[1974]: 2025-05-17 00:26:46.838 [WARNING][6435] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--228-k8s-coredns--7c65d6cfc9--5sswv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7dae1d2e-61e6-48f7-aad5-2ce4e3746c6c", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 25, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-228", ContainerID:"8bb4050d49c88f90713fb91fb557587cb86da9e061f976b4903c4923bd25e4bf", Pod:"coredns-7c65d6cfc9-5sswv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9698d724ce9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:46.879741 containerd[1974]: 2025-05-17 00:26:46.838 [INFO][6435] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9" May 17 00:26:46.879741 containerd[1974]: 2025-05-17 00:26:46.839 [INFO][6435] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9" iface="eth0" netns="" May 17 00:26:46.879741 containerd[1974]: 2025-05-17 00:26:46.839 [INFO][6435] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9" May 17 00:26:46.879741 containerd[1974]: 2025-05-17 00:26:46.839 [INFO][6435] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9" May 17 00:26:46.879741 containerd[1974]: 2025-05-17 00:26:46.867 [INFO][6443] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9" HandleID="k8s-pod-network.a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9" Workload="ip--172--31--23--228-k8s-coredns--7c65d6cfc9--5sswv-eth0" May 17 00:26:46.879741 containerd[1974]: 2025-05-17 00:26:46.867 [INFO][6443] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:46.879741 containerd[1974]: 2025-05-17 00:26:46.867 [INFO][6443] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:26:46.879741 containerd[1974]: 2025-05-17 00:26:46.873 [WARNING][6443] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9" HandleID="k8s-pod-network.a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9" Workload="ip--172--31--23--228-k8s-coredns--7c65d6cfc9--5sswv-eth0" May 17 00:26:46.879741 containerd[1974]: 2025-05-17 00:26:46.873 [INFO][6443] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9" HandleID="k8s-pod-network.a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9" Workload="ip--172--31--23--228-k8s-coredns--7c65d6cfc9--5sswv-eth0" May 17 00:26:46.879741 containerd[1974]: 2025-05-17 00:26:46.875 [INFO][6443] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:46.879741 containerd[1974]: 2025-05-17 00:26:46.877 [INFO][6435] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9" May 17 00:26:46.882157 containerd[1974]: time="2025-05-17T00:26:46.879790604Z" level=info msg="TearDown network for sandbox \"a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9\" successfully" May 17 00:26:46.882157 containerd[1974]: time="2025-05-17T00:26:46.879819762Z" level=info msg="StopPodSandbox for \"a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9\" returns successfully" May 17 00:26:46.882157 containerd[1974]: time="2025-05-17T00:26:46.880345212Z" level=info msg="RemovePodSandbox for \"a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9\"" May 17 00:26:46.882157 containerd[1974]: time="2025-05-17T00:26:46.880376132Z" level=info msg="Forcibly stopping sandbox \"a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9\"" May 17 00:26:46.963735 containerd[1974]: 2025-05-17 00:26:46.926 [WARNING][6457] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--228-k8s-coredns--7c65d6cfc9--5sswv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7dae1d2e-61e6-48f7-aad5-2ce4e3746c6c", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 25, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-228", ContainerID:"8bb4050d49c88f90713fb91fb557587cb86da9e061f976b4903c4923bd25e4bf", Pod:"coredns-7c65d6cfc9-5sswv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9698d724ce9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:26:46.963735 containerd[1974]: 2025-05-17 00:26:46.926 [INFO][6457] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9" May 17 00:26:46.963735 containerd[1974]: 2025-05-17 00:26:46.926 [INFO][6457] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9" iface="eth0" netns="" May 17 00:26:46.963735 containerd[1974]: 2025-05-17 00:26:46.926 [INFO][6457] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9" May 17 00:26:46.963735 containerd[1974]: 2025-05-17 00:26:46.926 [INFO][6457] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9" May 17 00:26:46.963735 containerd[1974]: 2025-05-17 00:26:46.951 [INFO][6464] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9" HandleID="k8s-pod-network.a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9" Workload="ip--172--31--23--228-k8s-coredns--7c65d6cfc9--5sswv-eth0" May 17 00:26:46.963735 containerd[1974]: 2025-05-17 00:26:46.951 [INFO][6464] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:26:46.963735 containerd[1974]: 2025-05-17 00:26:46.951 [INFO][6464] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:26:46.963735 containerd[1974]: 2025-05-17 00:26:46.958 [WARNING][6464] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9" HandleID="k8s-pod-network.a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9" Workload="ip--172--31--23--228-k8s-coredns--7c65d6cfc9--5sswv-eth0" May 17 00:26:46.963735 containerd[1974]: 2025-05-17 00:26:46.958 [INFO][6464] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9" HandleID="k8s-pod-network.a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9" Workload="ip--172--31--23--228-k8s-coredns--7c65d6cfc9--5sswv-eth0" May 17 00:26:46.963735 containerd[1974]: 2025-05-17 00:26:46.959 [INFO][6464] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:26:46.963735 containerd[1974]: 2025-05-17 00:26:46.961 [INFO][6457] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9" May 17 00:26:46.967797 containerd[1974]: time="2025-05-17T00:26:46.963777040Z" level=info msg="TearDown network for sandbox \"a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9\" successfully" May 17 00:26:46.971518 containerd[1974]: time="2025-05-17T00:26:46.971472327Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:26:46.971638 containerd[1974]: time="2025-05-17T00:26:46.971551104Z" level=info msg="RemovePodSandbox \"a5c1b9316322ec1327e0406dec9081a731ece4a861c31818732be1e492b92ed9\" returns successfully" May 17 00:26:48.006873 systemd[1]: Started sshd@14-172.31.23.228:22-147.75.109.163:48834.service - OpenSSH per-connection server daemon (147.75.109.163:48834). May 17 00:26:48.271683 sshd[6476]: Accepted publickey for core from 147.75.109.163 port 48834 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:26:48.276011 sshd[6476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:26:48.282492 systemd-logind[1955]: New session 15 of user core. May 17 00:26:48.286604 systemd[1]: Started session-15.scope - Session 15 of User core. May 17 00:26:49.287842 sshd[6476]: pam_unix(sshd:session): session closed for user core May 17 00:26:49.293410 systemd-logind[1955]: Session 15 logged out. Waiting for processes to exit. May 17 00:26:49.294498 systemd[1]: sshd@14-172.31.23.228:22-147.75.109.163:48834.service: Deactivated successfully. May 17 00:26:49.299226 systemd[1]: session-15.scope: Deactivated successfully. May 17 00:26:49.301125 systemd-logind[1955]: Removed session 15. May 17 00:26:49.317550 systemd[1]: Started sshd@15-172.31.23.228:22-147.75.109.163:45456.service - OpenSSH per-connection server daemon (147.75.109.163:45456). May 17 00:26:49.500010 sshd[6489]: Accepted publickey for core from 147.75.109.163 port 45456 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:26:49.501921 sshd[6489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:26:49.508464 systemd-logind[1955]: New session 16 of user core. May 17 00:26:49.511597 systemd[1]: Started session-16.scope - Session 16 of User core. 
May 17 00:26:50.211975 sshd[6489]: pam_unix(sshd:session): session closed for user core May 17 00:26:50.215885 systemd-logind[1955]: Session 16 logged out. Waiting for processes to exit. May 17 00:26:50.217152 systemd[1]: sshd@15-172.31.23.228:22-147.75.109.163:45456.service: Deactivated successfully. May 17 00:26:50.219306 systemd[1]: session-16.scope: Deactivated successfully. May 17 00:26:50.220303 systemd-logind[1955]: Removed session 16. May 17 00:26:50.239511 systemd[1]: Started sshd@16-172.31.23.228:22-147.75.109.163:45460.service - OpenSSH per-connection server daemon (147.75.109.163:45460). May 17 00:26:50.459495 sshd[6506]: Accepted publickey for core from 147.75.109.163 port 45460 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:26:50.463834 sshd[6506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:26:50.471891 systemd-logind[1955]: New session 17 of user core. May 17 00:26:50.474617 systemd[1]: Started session-17.scope - Session 17 of User core. May 17 00:26:52.688632 containerd[1974]: time="2025-05-17T00:26:52.687285267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:26:53.283952 containerd[1974]: time="2025-05-17T00:26:53.283905793Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:26:53.294441 containerd[1974]: time="2025-05-17T00:26:53.286285012Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:26:53.301742 containerd[1974]: time="2025-05-17T00:26:53.301686135Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:26:53.406796 kubelet[3186]: E0517 00:26:53.390544 3186 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:26:53.424870 kubelet[3186]: E0517 00:26:53.424791 3186 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:26:53.543212 kubelet[3186]: E0517 00:26:53.543058 3186 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-296m4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-ffn6m_calico-system(6c6feb10-5e10-4718-ae2e-34e0ec7b697f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:26:53.557118 kubelet[3186]: E0517 00:26:53.556686 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-ffn6m" podUID="6c6feb10-5e10-4718-ae2e-34e0ec7b697f"
May 17 00:26:53.800546 sshd[6506]: pam_unix(sshd:session): session closed for user core
May 17 00:26:53.855289 systemd[1]: sshd@16-172.31.23.228:22-147.75.109.163:45460.service: Deactivated successfully.
May 17 00:26:53.859240 systemd[1]: session-17.scope: Deactivated successfully.
May 17 00:26:53.861691 systemd-logind[1955]: Session 17 logged out. Waiting for processes to exit.
May 17 00:26:53.873857 systemd[1]: Started sshd@17-172.31.23.228:22-147.75.109.163:45470.service - OpenSSH per-connection server daemon (147.75.109.163:45470).
May 17 00:26:53.886630 systemd-logind[1955]: Removed session 17.
May 17 00:26:54.123925 sshd[6535]: Accepted publickey for core from 147.75.109.163 port 45470 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4
May 17 00:26:54.129073 sshd[6535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:26:54.137598 systemd-logind[1955]: New session 18 of user core.
May 17 00:26:54.145626 systemd[1]: Started session-18.scope - Session 18 of User core.
May 17 00:26:55.229480 sshd[6535]: pam_unix(sshd:session): session closed for user core
May 17 00:26:55.235260 systemd-logind[1955]: Session 18 logged out. Waiting for processes to exit.
May 17 00:26:55.235543 systemd[1]: sshd@17-172.31.23.228:22-147.75.109.163:45470.service: Deactivated successfully.
May 17 00:26:55.239494 systemd[1]: session-18.scope: Deactivated successfully.
May 17 00:26:55.240540 systemd-logind[1955]: Removed session 18.
May 17 00:26:55.266010 systemd[1]: Started sshd@18-172.31.23.228:22-147.75.109.163:45484.service - OpenSSH per-connection server daemon (147.75.109.163:45484).
May 17 00:26:55.476037 sshd[6549]: Accepted publickey for core from 147.75.109.163 port 45484 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4
May 17 00:26:55.478451 sshd[6549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:26:55.483381 systemd-logind[1955]: New session 19 of user core.
May 17 00:26:55.485806 systemd[1]: Started session-19.scope - Session 19 of User core.
May 17 00:26:55.741209 sshd[6549]: pam_unix(sshd:session): session closed for user core
May 17 00:26:55.746346 systemd[1]: sshd@18-172.31.23.228:22-147.75.109.163:45484.service: Deactivated successfully.
May 17 00:26:55.749394 systemd[1]: session-19.scope: Deactivated successfully.
May 17 00:26:55.752080 systemd-logind[1955]: Session 19 logged out. Waiting for processes to exit.
May 17 00:26:55.753890 systemd-logind[1955]: Removed session 19.
May 17 00:26:56.805438 kubelet[3186]: E0517 00:26:56.805367 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-78df879455-m7stx" podUID="35be9fdc-2ef5-4d7b-b281-9e429560f362"
May 17 00:27:00.778531 systemd[1]: Started sshd@19-172.31.23.228:22-147.75.109.163:36896.service - OpenSSH per-connection server daemon (147.75.109.163:36896).
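Every image-pull failure in this section chains back to one root cause: before fetching any blob, containerd asks ghcr.io's token endpoint for an anonymous pull token, and that GET returns 403 Forbidden. The handshake can be replayed off-node with a plain HTTP request; this diagnostic sketch simply re-issues the token URL copied from the log (it is not containerd code):

// Replay the anonymous token handshake containerd performs before pulling
// ghcr.io/flatcar/calico/goldmane:v3.30.0. URL copied verbatim from the log.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	const tokenURL = "https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io"
	resp, err := http.Get(tokenURL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(io.LimitReader(resp.Body, 512)) // first bytes are enough for diagnosis
	fmt.Printf("status: %s\nbody: %s\n", resp.Status, body)
	// A 200 with a JSON token means anonymous pulls work and the problem is
	// node-side; the 403 seen in the log means the repository is private,
	// renamed, or anonymous access is blocked -- no kubelet retry will fix it.
}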
May 17 00:27:01.083507 sshd[6586]: Accepted publickey for core from 147.75.109.163 port 36896 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4
May 17 00:27:01.089102 sshd[6586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:27:01.095543 systemd-logind[1955]: New session 20 of user core.
May 17 00:27:01.104698 systemd[1]: Started session-20.scope - Session 20 of User core.
May 17 00:27:02.706140 sshd[6586]: pam_unix(sshd:session): session closed for user core
May 17 00:27:02.712635 systemd[1]: sshd@19-172.31.23.228:22-147.75.109.163:36896.service: Deactivated successfully.
May 17 00:27:02.712939 systemd-logind[1955]: Session 20 logged out. Waiting for processes to exit.
May 17 00:27:02.717109 systemd[1]: session-20.scope: Deactivated successfully.
May 17 00:27:02.721451 systemd-logind[1955]: Removed session 20.
May 17 00:27:05.847458 kubelet[3186]: E0517 00:27:05.847213 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-ffn6m" podUID="6c6feb10-5e10-4718-ae2e-34e0ec7b697f"
May 17 00:27:07.764867 systemd[1]: Started sshd@20-172.31.23.228:22-147.75.109.163:36898.service - OpenSSH per-connection server daemon (147.75.109.163:36898).
May 17 00:27:07.980594 kubelet[3186]: I0517 00:27:07.944626 3186 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 17 00:27:08.141270 sshd[6602]: Accepted publickey for core from 147.75.109.163 port 36898 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4
May 17 00:27:08.146555 sshd[6602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:27:08.169459 systemd-logind[1955]: New session 21 of user core.
May 17 00:27:08.175621 systemd[1]: Started session-21.scope - Session 21 of User core.
May 17 00:27:09.066375 containerd[1974]: time="2025-05-17T00:27:08.983047683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\""
May 17 00:27:09.551712 sshd[6602]: pam_unix(sshd:session): session closed for user core
May 17 00:27:09.563331 systemd-logind[1955]: Session 21 logged out. Waiting for processes to exit.
May 17 00:27:09.565736 systemd[1]: sshd@20-172.31.23.228:22-147.75.109.163:36898.service: Deactivated successfully.
May 17 00:27:09.572796 systemd[1]: session-21.scope: Deactivated successfully.
May 17 00:27:09.578035 systemd-logind[1955]: Removed session 21.
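Between failed pulls, kubelet does not retry immediately: ImagePullBackOff is a client-side delay that starts at 10s and doubles per failure up to a 5-minute cap (kubelet's default backoff parameters; the roughly 20s and 40s gaps between successive PullImage attempts for the same image in this log match the early steps of that schedule). A minimal sketch of the doubling-with-cap policy:

// The doubling-with-cap retry schedule kubelet applies to failed image
// pulls (and, with the same defaults, to crashing containers). 10s initial
// delay and a 5-minute cap are kubelet's documented defaults.
package main

import (
	"fmt"
	"time"
)

func backoffSchedule(initial, max time.Duration, retries int) []time.Duration {
	out := make([]time.Duration, 0, retries)
	d := initial
	for i := 0; i < retries; i++ {
		out = append(out, d)
		d *= 2
		if d > max {
			d = max
		}
	}
	return out
}

func main() {
	// Prints: [10s 20s 40s 1m20s 2m40s 5m0s 5m0s 5m0s]
	fmt.Println(backoffSchedule(10*time.Second, 5*time.Minute, 8))
}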
May 17 00:27:09.742260 containerd[1974]: time="2025-05-17T00:27:09.742052254Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 17 00:27:09.751098 containerd[1974]: time="2025-05-17T00:27:09.744356278Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86"
May 17 00:27:09.754199 containerd[1974]: time="2025-05-17T00:27:09.754117000Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden"
May 17 00:27:09.788433 kubelet[3186]: E0517 00:27:09.773462 3186 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 17 00:27:09.794457 kubelet[3186]: E0517 00:27:09.789157 3186 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 17 00:27:09.927723 kubelet[3186]: E0517 00:27:09.926375 3186 kuberuntime_manager.go:1274] "Unhandled Error" err="container
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f414e32a0f9e4b79b1ec41579d3d8398,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8dbcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78df879455-m7stx_calico-system(35be9fdc-2ef5-4d7b-b281-9e429560f362): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:27:09.931408 containerd[1974]: time="2025-05-17T00:27:09.930851863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:27:10.128011 containerd[1974]: time="2025-05-17T00:27:10.127952950Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:27:10.130238 containerd[1974]: time="2025-05-17T00:27:10.130181614Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:27:10.130382 containerd[1974]: time="2025-05-17T00:27:10.130293302Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:27:10.130539 kubelet[3186]: E0517 00:27:10.130498 3186 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:27:10.131638 kubelet[3186]: E0517 00:27:10.130550 3186 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:27:10.131638 kubelet[3186]: E0517 00:27:10.130679 3186 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8dbcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78df879455-m7stx_calico-system(35be9fdc-2ef5-4d7b-b281-9e429560f362): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:27:10.146319 kubelet[3186]: E0517 00:27:10.146213 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-78df879455-m7stx" podUID="35be9fdc-2ef5-4d7b-b281-9e429560f362"
May 17 00:27:14.608987 systemd[1]: Started sshd@21-172.31.23.228:22-147.75.109.163:46644.service - OpenSSH per-connection server daemon (147.75.109.163:46644).
May 17 00:27:14.859551 sshd[6626]: Accepted publickey for core from 147.75.109.163 port 46644 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4
May 17 00:27:14.862650 sshd[6626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:27:14.875047 systemd-logind[1955]: New session 22 of user core.
May 17 00:27:14.883652 systemd[1]: Started session-22.scope - Session 22 of User core.
May 17 00:27:15.865678 sshd[6626]: pam_unix(sshd:session): session closed for user core
May 17 00:27:15.881662 systemd[1]: sshd@21-172.31.23.228:22-147.75.109.163:46644.service: Deactivated successfully.
May 17 00:27:15.887106 systemd[1]: session-22.scope: Deactivated successfully.
May 17 00:27:15.895698 systemd-logind[1955]: Session 22 logged out. Waiting for processes to exit.
May 17 00:27:15.899066 systemd-logind[1955]: Removed session 22.
May 17 00:27:18.854692 containerd[1974]: time="2025-05-17T00:27:18.854638615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 17 00:27:19.083705 containerd[1974]: time="2025-05-17T00:27:19.083646496Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 17 00:27:19.086008 containerd[1974]: time="2025-05-17T00:27:19.085953179Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 17 00:27:19.086143 containerd[1974]: time="2025-05-17T00:27:19.085977436Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 17 00:27:19.086345 kubelet[3186]: E0517 00:27:19.086297 3186 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 17 00:27:19.086843 kubelet[3186]: E0517 00:27:19.086363 3186
kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:27:19.092513 kubelet[3186]: E0517 00:27:19.092407 3186 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-296m4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-ffn6m_calico-system(6c6feb10-5e10-4718-ae2e-34e0ec7b697f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 17 00:27:19.093686 kubelet[3186]: E0517 00:27:19.093640 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-ffn6m" podUID="6c6feb10-5e10-4718-ae2e-34e0ec7b697f"
May 17 00:27:20.899866 systemd[1]: Started sshd@22-172.31.23.228:22-147.75.109.163:55270.service - OpenSSH per-connection server daemon (147.75.109.163:55270).
May 17 00:27:21.206459 sshd[6660]: Accepted publickey for core from 147.75.109.163 port 55270 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4
May 17 00:27:21.212119 sshd[6660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:27:21.220553 systemd-logind[1955]: New session 23 of user core.
May 17 00:27:21.223610 systemd[1]: Started session-23.scope - Session 23 of User core.
May 17 00:27:22.262335 sshd[6660]: pam_unix(sshd:session): session closed for user core
May 17 00:27:22.269681 systemd-logind[1955]: Session 23 logged out. Waiting for processes to exit.
May 17 00:27:22.271709 systemd[1]: sshd@22-172.31.23.228:22-147.75.109.163:55270.service: Deactivated successfully.
May 17 00:27:22.277036 systemd[1]: session-23.scope: Deactivated successfully.
May 17 00:27:22.278968 systemd-logind[1955]: Removed session 23.
May 17 00:27:24.835191 kubelet[3186]: E0517 00:27:24.835091 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-78df879455-m7stx" podUID="35be9fdc-2ef5-4d7b-b281-9e429560f362"
May 17 00:27:27.306739 systemd[1]: Started sshd@23-172.31.23.228:22-147.75.109.163:55280.service - OpenSSH per-connection server daemon (147.75.109.163:55280).
May 17 00:27:27.511383 sshd[6675]: Accepted publickey for core from 147.75.109.163 port 55280 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4
May 17 00:27:27.514346 sshd[6675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:27:27.525230 systemd-logind[1955]: New session 24 of user core.
May 17 00:27:27.529785 systemd[1]: Started session-24.scope - Session 24 of User core.
May 17 00:27:27.663617 systemd[1]: run-containerd-runc-k8s.io-ac42a82a61d2f910cba8531945cbd4cc55f9544dd00afd3ee66eb57d2755662b-runc.xyC50k.mount: Deactivated successfully.
May 17 00:27:29.294947 sshd[6675]: pam_unix(sshd:session): session closed for user core
May 17 00:27:29.330508 systemd[1]: sshd@23-172.31.23.228:22-147.75.109.163:55280.service: Deactivated successfully.
May 17 00:27:29.335148 systemd[1]: session-24.scope: Deactivated successfully.
May 17 00:27:29.336602 systemd-logind[1955]: Session 24 logged out. Waiting for processes to exit.
May 17 00:27:29.340454 systemd-logind[1955]: Removed session 24.
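The &Container{...} blocks in these errors are kubelet printing its resolved container spec with Go's %v verb, which is why each arrives as one unbroken line. Transcribed into k8s.io/api types for readability, the goldmane container dumped above corresponds roughly to the following (a reconstruction from the log, not the Tigera operator's source; volume mounts omitted, and the field layout assumes k8s.io/api v0.23+, where Probe embeds ProbeHandler):

// goldmane container spec, reconstructed from the kubelet dump above.
package main

import (
	corev1 "k8s.io/api/core/v1"
)

func boolPtr(b bool) *bool    { return &b }
func int64Ptr(i int64) *int64 { return &i }

var goldmane = corev1.Container{
	Name:  "goldmane",
	Image: "ghcr.io/flatcar/calico/goldmane:v3.30.0",
	Env: []corev1.EnvVar{
		{Name: "LOG_LEVEL", Value: "INFO"},
		{Name: "PORT", Value: "7443"},
		{Name: "SERVER_CERT_PATH", Value: "/goldmane-key-pair/tls.crt"},
		{Name: "SERVER_KEY_PATH", Value: "/goldmane-key-pair/tls.key"},
		{Name: "CA_CERT_PATH", Value: "/etc/pki/tls/certs/tigera-ca-bundle.crt"},
		{Name: "PUSH_URL", Value: "https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk"},
		{Name: "FILE_CONFIG_PATH", Value: "/config/config.json"},
		{Name: "HEALTH_ENABLED", Value: "true"},
	},
	LivenessProbe: &corev1.Probe{
		ProbeHandler:   corev1.ProbeHandler{Exec: &corev1.ExecAction{Command: []string{"/health", "-live"}}},
		TimeoutSeconds: 5, PeriodSeconds: 60, SuccessThreshold: 1, FailureThreshold: 3,
	},
	ReadinessProbe: &corev1.Probe{
		ProbeHandler:   corev1.ProbeHandler{Exec: &corev1.ExecAction{Command: []string{"/health", "-ready"}}},
		TimeoutSeconds: 5, PeriodSeconds: 30, SuccessThreshold: 1, FailureThreshold: 3,
	},
	SecurityContext: &corev1.SecurityContext{
		Capabilities:             &corev1.Capabilities{Drop: []corev1.Capability{"ALL"}},
		Privileged:               boolPtr(false),
		RunAsUser:                int64Ptr(10001),
		RunAsGroup:               int64Ptr(10001),
		RunAsNonRoot:             boolPtr(true),
		AllowPrivilegeEscalation: boolPtr(false),
		SeccompProfile:           &corev1.SeccompProfile{Type: corev1.SeccompProfileTypeRuntimeDefault},
	},
}

func main() { _ = goldmane }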
May 17 00:27:30.851033 kubelet[3186]: E0517 00:27:30.850754 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-ffn6m" podUID="6c6feb10-5e10-4718-ae2e-34e0ec7b697f"
May 17 00:27:34.355822 systemd[1]: Started sshd@24-172.31.23.228:22-147.75.109.163:40764.service - OpenSSH per-connection server daemon (147.75.109.163:40764).
May 17 00:27:34.620266 sshd[6707]: Accepted publickey for core from 147.75.109.163 port 40764 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4
May 17 00:27:34.623689 sshd[6707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:27:34.631864 systemd-logind[1955]: New session 25 of user core.
May 17 00:27:34.636667 systemd[1]: Started session-25.scope - Session 25 of User core.
May 17 00:27:35.378438 sshd[6707]: pam_unix(sshd:session): session closed for user core
May 17 00:27:35.384912 systemd[1]: sshd@24-172.31.23.228:22-147.75.109.163:40764.service: Deactivated successfully.
May 17 00:27:35.385657 systemd-logind[1955]: Session 25 logged out. Waiting for processes to exit.
May 17 00:27:35.390292 systemd[1]: session-25.scope: Deactivated successfully.
May 17 00:27:35.394704 systemd-logind[1955]: Removed session 25.
May 17 00:27:35.798954 kubelet[3186]: E0517 00:27:35.798743 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-78df879455-m7stx" podUID="35be9fdc-2ef5-4d7b-b281-9e429560f362"
May 17 00:27:45.796112 kubelet[3186]: E0517 00:27:45.795292 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-ffn6m" podUID="6c6feb10-5e10-4718-ae2e-34e0ec7b697f"
May 17 00:27:49.855501 containerd[1974]: time="2025-05-17T00:27:49.827727056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\""
May 17 00:27:50.220520 containerd[1974]: time="2025-05-17T00:27:50.220369583Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 17 00:27:50.222537 containerd[1974]: time="2025-05-17T00:27:50.222492179Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden"
May 17 00:27:50.222836 containerd[1974]: time="2025-05-17T00:27:50.222601994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86"
May 17 00:27:50.234770 kubelet[3186]: E0517 00:27:50.234000 3186 log.go:32] "PullImage from image service
failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:27:50.243805 kubelet[3186]: E0517 00:27:50.243741 3186 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:27:50.318083 kubelet[3186]: E0517 00:27:50.318016 3186 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f414e32a0f9e4b79b1ec41579d3d8398,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8dbcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78df879455-m7stx_calico-system(35be9fdc-2ef5-4d7b-b281-9e429560f362): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:27:50.320310 containerd[1974]: time="2025-05-17T00:27:50.320271215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:27:50.538287 containerd[1974]: time="2025-05-17T00:27:50.538159642Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:27:50.540455 containerd[1974]: 
time="2025-05-17T00:27:50.540324496Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:27:50.540455 containerd[1974]: time="2025-05-17T00:27:50.540361894Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:27:50.541274 kubelet[3186]: E0517 00:27:50.540556 3186 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:27:50.541274 kubelet[3186]: E0517 00:27:50.540603 3186 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:27:50.541274 kubelet[3186]: E0517 00:27:50.540713 3186 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8dbcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78df879455-m7stx_calico-system(35be9fdc-2ef5-4d7b-b281-9e429560f362): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:27:50.542002 kubelet[3186]: E0517 00:27:50.541919 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-78df879455-m7stx" podUID="35be9fdc-2ef5-4d7b-b281-9e429560f362" May 17 00:27:52.522906 systemd[1]: cri-containerd-55a075ed4342b8f089ea420e3c945036935bb00ec94429441eb8b9ee6f5b47ac.scope: Deactivated successfully. 
May 17 00:27:52.523728 systemd[1]: cri-containerd-55a075ed4342b8f089ea420e3c945036935bb00ec94429441eb8b9ee6f5b47ac.scope: Consumed 4.196s CPU time, 23.8M memory peak, 0B memory swap peak.
May 17 00:27:52.652937 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55a075ed4342b8f089ea420e3c945036935bb00ec94429441eb8b9ee6f5b47ac-rootfs.mount: Deactivated successfully.
May 17 00:27:52.677906 containerd[1974]: time="2025-05-17T00:27:52.677822627Z" level=info msg="shim disconnected" id=55a075ed4342b8f089ea420e3c945036935bb00ec94429441eb8b9ee6f5b47ac namespace=k8s.io
May 17 00:27:52.677906 containerd[1974]: time="2025-05-17T00:27:52.677900428Z" level=warning msg="cleaning up after shim disconnected" id=55a075ed4342b8f089ea420e3c945036935bb00ec94429441eb8b9ee6f5b47ac namespace=k8s.io
May 17 00:27:52.677906 containerd[1974]: time="2025-05-17T00:27:52.677911328Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:27:52.723730 systemd[1]: cri-containerd-1573bc436eb3c26dd10949aacd67728e966ceecdbd0ebc7ff2889265a0b11972.scope: Deactivated successfully.
May 17 00:27:52.724051 systemd[1]: cri-containerd-1573bc436eb3c26dd10949aacd67728e966ceecdbd0ebc7ff2889265a0b11972.scope: Consumed 12.155s CPU time.
May 17 00:27:52.769477 containerd[1974]: time="2025-05-17T00:27:52.757123714Z" level=info msg="shim disconnected" id=1573bc436eb3c26dd10949aacd67728e966ceecdbd0ebc7ff2889265a0b11972 namespace=k8s.io
May 17 00:27:52.769477 containerd[1974]: time="2025-05-17T00:27:52.757206716Z" level=warning msg="cleaning up after shim disconnected" id=1573bc436eb3c26dd10949aacd67728e966ceecdbd0ebc7ff2889265a0b11972 namespace=k8s.io
May 17 00:27:52.769477 containerd[1974]: time="2025-05-17T00:27:52.757234760Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:27:52.767382 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1573bc436eb3c26dd10949aacd67728e966ceecdbd0ebc7ff2889265a0b11972-rootfs.mount: Deactivated successfully.
May 17 00:27:53.823711 kubelet[3186]: I0517 00:27:53.823644 3186 scope.go:117] "RemoveContainer" containerID="072019e2818e4b34c62db597f1a315384853518ab6ea96e9a5c4526eb808718b"
May 17 00:27:53.830106 kubelet[3186]: I0517 00:27:53.830061 3186 scope.go:117] "RemoveContainer" containerID="55a075ed4342b8f089ea420e3c945036935bb00ec94429441eb8b9ee6f5b47ac"
May 17 00:27:53.830441 kubelet[3186]: I0517 00:27:53.830411 3186 scope.go:117] "RemoveContainer" containerID="1573bc436eb3c26dd10949aacd67728e966ceecdbd0ebc7ff2889265a0b11972"
May 17 00:27:53.836342 kubelet[3186]: E0517 00:27:53.836285 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7c5755cdcb-jwnrk_tigera-operator(902d38c4-38cf-488d-bd11-2abf675574c6)\"" pod="tigera-operator/tigera-operator-7c5755cdcb-jwnrk" podUID="902d38c4-38cf-488d-bd11-2abf675574c6"
May 17 00:27:53.876929 containerd[1974]: time="2025-05-17T00:27:53.876216284Z" level=info msg="CreateContainer within sandbox \"1298a3a140b59d78ce99026de9bf6be1a890620dac404ac53b6738fda66315f8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
May 17 00:27:53.941801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3158766122.mount: Deactivated successfully.
May 17 00:27:53.948139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4011477596.mount: Deactivated successfully.
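The "Consumed 4.196s CPU time, 23.8M memory peak" lines are systemd's resource accounting for the transient cri-containerd-<id>.scope unit each container runs in, reported when the scope stops. While a scope is still running, the same counters can be read back from its unit properties; a sketch that shells out to systemctl (assumes systemctl is on PATH, and note MemoryPeak is only exposed by recent systemd releases, while older ones offer only MemoryCurrent):

// Read CPU/memory accounting for a container's transient systemd scope,
// the counters behind the "Consumed ... CPU time, ... memory peak" lines.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: scope_stats <container-id>")
		os.Exit(1)
	}
	unit := "cri-containerd-" + os.Args[1] + ".scope"
	out, err := exec.Command("systemctl", "show", unit,
		"-p", "CPUUsageNSec,MemoryPeak,MemoryCurrent").CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "systemctl show failed: %v\n%s", err, out)
		os.Exit(1)
	}
	fmt.Print(string(out)) // CPUUsageNSec is nanoseconds; memory values are bytes
}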
May 17 00:27:53.958602 containerd[1974]: time="2025-05-17T00:27:53.958535505Z" level=info msg="CreateContainer within sandbox \"1298a3a140b59d78ce99026de9bf6be1a890620dac404ac53b6738fda66315f8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2df174543b8c7102d9e0cfc169a0bc4c8871a2ceae4acf01c046aa8c5b221c8d\""
May 17 00:27:53.958602 containerd[1974]: time="2025-05-17T00:27:53.959004879Z" level=info msg="StartContainer for \"2df174543b8c7102d9e0cfc169a0bc4c8871a2ceae4acf01c046aa8c5b221c8d\""
May 17 00:27:54.020659 systemd[1]: Started cri-containerd-2df174543b8c7102d9e0cfc169a0bc4c8871a2ceae4acf01c046aa8c5b221c8d.scope - libcontainer container 2df174543b8c7102d9e0cfc169a0bc4c8871a2ceae4acf01c046aa8c5b221c8d.
May 17 00:27:54.022704 containerd[1974]: time="2025-05-17T00:27:54.022660979Z" level=info msg="RemoveContainer for \"072019e2818e4b34c62db597f1a315384853518ab6ea96e9a5c4526eb808718b\""
May 17 00:27:54.034038 containerd[1974]: time="2025-05-17T00:27:54.033920909Z" level=info msg="RemoveContainer for \"072019e2818e4b34c62db597f1a315384853518ab6ea96e9a5c4526eb808718b\" returns successfully"
May 17 00:27:54.083492 containerd[1974]: time="2025-05-17T00:27:54.083372611Z" level=info msg="StartContainer for \"2df174543b8c7102d9e0cfc169a0bc4c8871a2ceae4acf01c046aa8c5b221c8d\" returns successfully"
May 17 00:27:54.930410 systemd[1]: run-containerd-runc-k8s.io-2df174543b8c7102d9e0cfc169a0bc4c8871a2ceae4acf01c046aa8c5b221c8d-runc.CYxfeH.mount: Deactivated successfully.
May 17 00:27:56.549518 systemd[1]: cri-containerd-93137452b0ff0fdc42495b478ed0d47fb5b3b795dea21ad1f06df9e5b1681e42.scope: Deactivated successfully.
May 17 00:27:56.550059 systemd[1]: cri-containerd-93137452b0ff0fdc42495b478ed0d47fb5b3b795dea21ad1f06df9e5b1681e42.scope: Consumed 1.667s CPU time, 20.6M memory peak, 0B memory swap peak.
May 17 00:27:56.579967 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93137452b0ff0fdc42495b478ed0d47fb5b3b795dea21ad1f06df9e5b1681e42-rootfs.mount: Deactivated successfully.
May 17 00:27:56.580961 containerd[1974]: time="2025-05-17T00:27:56.580606327Z" level=info msg="shim disconnected" id=93137452b0ff0fdc42495b478ed0d47fb5b3b795dea21ad1f06df9e5b1681e42 namespace=k8s.io
May 17 00:27:56.580961 containerd[1974]: time="2025-05-17T00:27:56.580654095Z" level=warning msg="cleaning up after shim disconnected" id=93137452b0ff0fdc42495b478ed0d47fb5b3b795dea21ad1f06df9e5b1681e42 namespace=k8s.io
May 17 00:27:56.580961 containerd[1974]: time="2025-05-17T00:27:56.580662175Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:27:56.610404 kubelet[3186]: E0517 00:27:56.608868 3186 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.228:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-228?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
May 17 00:27:56.800645 kubelet[3186]: E0517 00:27:56.800527 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-ffn6m" podUID="6c6feb10-5e10-4718-ae2e-34e0ec7b697f"
May 17 00:27:56.822792 kubelet[3186]: I0517 00:27:56.822751 3186 scope.go:117] "RemoveContainer" containerID="93137452b0ff0fdc42495b478ed0d47fb5b3b795dea21ad1f06df9e5b1681e42"
May 17 00:27:56.830115 containerd[1974]: time="2025-05-17T00:27:56.830072119Z" level=info msg="CreateContainer within sandbox \"10dbb15031ee9db1ec4394fc4e0c2415afc61fc358f2e7d222a09a904501676a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
May 17 00:27:56.851324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3129016433.mount: Deactivated successfully.
May 17 00:27:56.854771 containerd[1974]: time="2025-05-17T00:27:56.854726920Z" level=info msg="CreateContainer within sandbox \"10dbb15031ee9db1ec4394fc4e0c2415afc61fc358f2e7d222a09a904501676a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"eb665e5e03f8eb16e591677070202cf04938da520cba8f1fdb5105e21f2181ca\""
May 17 00:27:56.856279 containerd[1974]: time="2025-05-17T00:27:56.855262408Z" level=info msg="StartContainer for \"eb665e5e03f8eb16e591677070202cf04938da520cba8f1fdb5105e21f2181ca\""
May 17 00:27:56.895671 systemd[1]: Started cri-containerd-eb665e5e03f8eb16e591677070202cf04938da520cba8f1fdb5105e21f2181ca.scope - libcontainer container eb665e5e03f8eb16e591677070202cf04938da520cba8f1fdb5105e21f2181ca.
May 17 00:27:56.942997 containerd[1974]: time="2025-05-17T00:27:56.942342263Z" level=info msg="StartContainer for \"eb665e5e03f8eb16e591677070202cf04938da520cba8f1fdb5105e21f2181ca\" returns successfully"
May 17 00:27:57.593041 systemd[1]: run-containerd-runc-k8s.io-ac42a82a61d2f910cba8531945cbd4cc55f9544dd00afd3ee66eb57d2755662b-runc.skmRg4.mount: Deactivated successfully.
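"Failed to update lease" is the kubelet's node heartbeat: it periodically PUTs a coordination.k8s.io/v1 Lease named after the node in the kube-node-lease namespace, with the 10s client timeout visible in the URL. With the control-plane containers (kube-scheduler, kube-controller-manager) restarting around this point, one slow apiserver response is enough to trip that timeout. The renewal reduces to the following client-go sketch (in-cluster config assumed; the real kubelet retries and tolerates transient failures):

// Renew the node's heartbeat Lease, the call that times out in the log.
// Node name taken from the log; requires k8s.io/client-go.
package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) // same 10s budget as the log
	defer cancel()
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(ctx, "ip-172-31-23-228", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	now := metav1.NewMicroTime(time.Now())
	lease.Spec.RenewTime = &now
	if _, err := cs.CoordinationV1().Leases("kube-node-lease").Update(ctx, lease, metav1.UpdateOptions{}); err != nil {
		panic(err) // this Update is what failed with Client.Timeout above
	}
}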
May 17 00:28:03.816816 kubelet[3186]: E0517 00:28:03.816492 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-78df879455-m7stx" podUID="35be9fdc-2ef5-4d7b-b281-9e429560f362"
May 17 00:28:05.795420 kubelet[3186]: I0517 00:28:05.794924 3186 scope.go:117] "RemoveContainer" containerID="1573bc436eb3c26dd10949aacd67728e966ceecdbd0ebc7ff2889265a0b11972"
May 17 00:28:05.827324 containerd[1974]: time="2025-05-17T00:28:05.827282554Z" level=info msg="CreateContainer within sandbox \"91ff33bd9e4da250988c6d1a6719be933d357cffc738f971f10ffed086824703\" for container &ContainerMetadata{Name:tigera-operator,Attempt:2,}"
May 17 00:28:05.849649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount476486451.mount: Deactivated successfully.
May 17 00:28:05.855648 containerd[1974]: time="2025-05-17T00:28:05.855523443Z" level=info msg="CreateContainer within sandbox \"91ff33bd9e4da250988c6d1a6719be933d357cffc738f971f10ffed086824703\" for &ContainerMetadata{Name:tigera-operator,Attempt:2,} returns container id \"6b212c86e93fe515f12b8bb36a91cbbd7bd21c7e3108bbad694982f43c470ef0\""
May 17 00:28:05.856132 containerd[1974]: time="2025-05-17T00:28:05.856105228Z" level=info msg="StartContainer for \"6b212c86e93fe515f12b8bb36a91cbbd7bd21c7e3108bbad694982f43c470ef0\""
May 17 00:28:05.908392 systemd[1]: Started cri-containerd-6b212c86e93fe515f12b8bb36a91cbbd7bd21c7e3108bbad694982f43c470ef0.scope - libcontainer container 6b212c86e93fe515f12b8bb36a91cbbd7bd21c7e3108bbad694982f43c470ef0.
May 17 00:28:05.948335 containerd[1974]: time="2025-05-17T00:28:05.948267076Z" level=info msg="StartContainer for \"6b212c86e93fe515f12b8bb36a91cbbd7bd21c7e3108bbad694982f43c470ef0\" returns successfully"
May 17 00:28:06.610984 kubelet[3186]: E0517 00:28:06.610909 3186 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.228:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-228?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
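Attempt:2 in the CreateContainer metadata above is the CRI-level restart counter: each time kubelet recreates a crashed container in the same sandbox, it bumps the attempt number it passes to containerd (tigera-operator is on its second restart here). Those records can be listed straight off the CRI socket; a sketch against containerd's default socket path (root required; assumes the k8s.io/cri-api and google.golang.org/grpc modules):

// List CRI containers with their Attempt counters, the number seen in the
// CreateContainer metadata above. Talks to containerd's CRI endpoint.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	resp, err := runtimeapi.NewRuntimeServiceClient(conn).ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// e.g. "tigera-operator  attempt=2 state=CONTAINER_RUNNING"
		fmt.Printf("%-40s attempt=%d state=%s\n", c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}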