Jul 7 00:00:33.956748 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 22:23:50 -00 2025
Jul 7 00:00:33.956790 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 7 00:00:33.956809 kernel: BIOS-provided physical RAM map:
Jul 7 00:00:33.956821 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 7 00:00:33.956832 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Jul 7 00:00:33.956843 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Jul 7 00:00:33.956858 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Jul 7 00:00:33.956869 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Jul 7 00:00:33.956881 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Jul 7 00:00:33.956897 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Jul 7 00:00:33.956909 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Jul 7 00:00:33.956920 kernel: NX (Execute Disable) protection: active
Jul 7 00:00:33.956932 kernel: APIC: Static calls initialized
Jul 7 00:00:33.956944 kernel: efi: EFI v2.7 by EDK II
Jul 7 00:00:33.956960 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518
Jul 7 00:00:33.956978 kernel: SMBIOS 2.7 present.
Jul 7 00:00:33.956991 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jul 7 00:00:33.957004 kernel: Hypervisor detected: KVM
Jul 7 00:00:33.957018 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 7 00:00:33.957031 kernel: kvm-clock: using sched offset of 3728112097 cycles
Jul 7 00:00:33.957046 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 7 00:00:33.957061 kernel: tsc: Detected 2499.996 MHz processor
Jul 7 00:00:33.957076 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 7 00:00:33.957092 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 7 00:00:33.957107 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Jul 7 00:00:33.957125 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 7 00:00:33.957139 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 7 00:00:33.957153 kernel: Using GB pages for direct mapping
Jul 7 00:00:33.957166 kernel: Secure boot disabled
Jul 7 00:00:33.957179 kernel: ACPI: Early table checksum verification disabled
Jul 7 00:00:33.957192 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Jul 7 00:00:33.957205 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Jul 7 00:00:33.957218 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jul 7 00:00:33.957231 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jul 7 00:00:33.957248 kernel: ACPI: FACS 0x00000000789D0000 000040
Jul 7 00:00:33.957261 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jul 7 00:00:33.957274 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jul 7 00:00:33.957287 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jul 7 00:00:33.957300 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jul 7 00:00:33.957314 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jul 7 00:00:33.957333 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jul 7 00:00:33.957350 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jul 7 00:00:33.957365 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Jul 7 00:00:33.957379 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Jul 7 00:00:33.957393 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Jul 7 00:00:33.957408 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Jul 7 00:00:33.957422 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Jul 7 00:00:33.957440 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Jul 7 00:00:33.957455 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Jul 7 00:00:33.957470 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Jul 7 00:00:33.957484 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Jul 7 00:00:33.957498 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Jul 7 00:00:33.957512 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Jul 7 00:00:33.957526 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Jul 7 00:00:33.957541 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jul 7 00:00:33.957555 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jul 7 00:00:33.957570 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jul 7 00:00:33.957588 kernel: NUMA: Initialized distance table, cnt=1
Jul 7 00:00:33.957602 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff]
Jul 7 00:00:33.957617 kernel: Zone ranges:
Jul 7 00:00:33.957631 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 7 00:00:33.957645 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Jul 7 00:00:33.957660 kernel: Normal empty
Jul 7 00:00:33.957675 kernel: Movable zone start for each node
Jul 7 00:00:33.957708 kernel: Early memory node ranges
Jul 7 00:00:33.960014 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 7 00:00:33.960053 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Jul 7 00:00:33.960068 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Jul 7 00:00:33.960083 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Jul 7 00:00:33.960099 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 7 00:00:33.960112 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 7 00:00:33.960128 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jul 7 00:00:33.960143 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Jul 7 00:00:33.960156 kernel: ACPI: PM-Timer IO Port: 0xb008
Jul 7 00:00:33.960172 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 7 00:00:33.960190 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jul 7 00:00:33.960203 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 7 00:00:33.960217 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 7 00:00:33.960232 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 7 00:00:33.960246 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 7 00:00:33.960260 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 7 00:00:33.960274 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 7 00:00:33.960289 kernel: TSC deadline timer available
Jul 7 00:00:33.960303 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jul 7 00:00:33.960318 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 7 00:00:33.960335 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Jul 7 00:00:33.960350 kernel: Booting paravirtualized kernel on KVM
Jul 7 00:00:33.960363 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 7 00:00:33.960377 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 7 00:00:33.960392 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Jul 7 00:00:33.960406 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Jul 7 00:00:33.960420 kernel: pcpu-alloc: [0] 0 1
Jul 7 00:00:33.960435 kernel: kvm-guest: PV spinlocks enabled
Jul 7 00:00:33.960451 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 7 00:00:33.960472 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 7 00:00:33.960488 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 7 00:00:33.960501 kernel: random: crng init done
Jul 7 00:00:33.960514 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 7 00:00:33.960529 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 7 00:00:33.960544 kernel: Fallback order for Node 0: 0
Jul 7 00:00:33.960558 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Jul 7 00:00:33.960576 kernel: Policy zone: DMA32
Jul 7 00:00:33.960591 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 7 00:00:33.960606 kernel: Memory: 1874608K/2037804K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42868K init, 2324K bss, 162936K reserved, 0K cma-reserved)
Jul 7 00:00:33.960621 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 7 00:00:33.960637 kernel: Kernel/User page tables isolation: enabled
Jul 7 00:00:33.960650 kernel: ftrace: allocating 37966 entries in 149 pages
Jul 7 00:00:33.960664 kernel: ftrace: allocated 149 pages with 4 groups
Jul 7 00:00:33.960678 kernel: Dynamic Preempt: voluntary
Jul 7 00:00:33.960718 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 7 00:00:33.960737 kernel: rcu: RCU event tracing is enabled.
Jul 7 00:00:33.960752 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 7 00:00:33.960767 kernel: Trampoline variant of Tasks RCU enabled.
Jul 7 00:00:33.960781 kernel: Rude variant of Tasks RCU enabled.
Jul 7 00:00:33.960797 kernel: Tracing variant of Tasks RCU enabled.
Jul 7 00:00:33.960813 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 7 00:00:33.960829 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 7 00:00:33.960845 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 7 00:00:33.960873 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 7 00:00:33.960886 kernel: Console: colour dummy device 80x25
Jul 7 00:00:33.960901 kernel: printk: console [tty0] enabled
Jul 7 00:00:33.960915 kernel: printk: console [ttyS0] enabled
Jul 7 00:00:33.960932 kernel: ACPI: Core revision 20230628
Jul 7 00:00:33.960947 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jul 7 00:00:33.960961 kernel: APIC: Switch to symmetric I/O mode setup
Jul 7 00:00:33.960975 kernel: x2apic enabled
Jul 7 00:00:33.960990 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 7 00:00:33.961005 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jul 7 00:00:33.961022 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Jul 7 00:00:33.961037 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jul 7 00:00:33.961052 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jul 7 00:00:33.961066 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 7 00:00:33.961080 kernel: Spectre V2 : Mitigation: Retpolines
Jul 7 00:00:33.961093 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 7 00:00:33.961107 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jul 7 00:00:33.961122 kernel: RETBleed: Vulnerable
Jul 7 00:00:33.961136 kernel: Speculative Store Bypass: Vulnerable
Jul 7 00:00:33.961153 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 7 00:00:33.961168 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 7 00:00:33.961186 kernel: GDS: Unknown: Dependent on hypervisor status
Jul 7 00:00:33.961199 kernel: ITS: Mitigation: Aligned branch/return thunks
Jul 7 00:00:33.961213 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 7 00:00:33.961227 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 7 00:00:33.961244 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 7 00:00:33.961261 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jul 7 00:00:33.961277 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jul 7 00:00:33.961293 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jul 7 00:00:33.961310 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jul 7 00:00:33.961329 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jul 7 00:00:33.961344 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jul 7 00:00:33.961359 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 7 00:00:33.961375 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jul 7 00:00:33.961391 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jul 7 00:00:33.961407 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jul 7 00:00:33.961424 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jul 7 00:00:33.961440 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jul 7 00:00:33.961457 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jul 7 00:00:33.961474 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jul 7 00:00:33.961490 kernel: Freeing SMP alternatives memory: 32K
Jul 7 00:00:33.961510 kernel: pid_max: default: 32768 minimum: 301
Jul 7 00:00:33.961527 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 7 00:00:33.961544 kernel: landlock: Up and running.
Jul 7 00:00:33.961560 kernel: SELinux: Initializing.
Jul 7 00:00:33.961576 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 7 00:00:33.961591 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 7 00:00:33.961608 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jul 7 00:00:33.961624 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 00:00:33.961641 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 00:00:33.961659 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 00:00:33.961676 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jul 7 00:00:33.961725 kernel: signal: max sigframe size: 3632
Jul 7 00:00:33.961741 kernel: rcu: Hierarchical SRCU implementation.
Jul 7 00:00:33.961758 kernel: rcu: Max phase no-delay instances is 400.
Jul 7 00:00:33.961773 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 7 00:00:33.961788 kernel: smp: Bringing up secondary CPUs ...
Jul 7 00:00:33.961803 kernel: smpboot: x86: Booting SMP configuration:
Jul 7 00:00:33.961818 kernel: .... node #0, CPUs: #1
Jul 7 00:00:33.961836 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jul 7 00:00:33.961853 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jul 7 00:00:33.961875 kernel: smp: Brought up 1 node, 2 CPUs
Jul 7 00:00:33.961891 kernel: smpboot: Max logical packages: 1
Jul 7 00:00:33.961907 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Jul 7 00:00:33.961923 kernel: devtmpfs: initialized
Jul 7 00:00:33.961938 kernel: x86/mm: Memory block size: 128MB
Jul 7 00:00:33.961957 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Jul 7 00:00:33.961979 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 7 00:00:33.961997 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 7 00:00:33.962015 kernel: pinctrl core: initialized pinctrl subsystem
Jul 7 00:00:33.962030 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 7 00:00:33.962044 kernel: audit: initializing netlink subsys (disabled)
Jul 7 00:00:33.962058 kernel: audit: type=2000 audit(1751846433.872:1): state=initialized audit_enabled=0 res=1
Jul 7 00:00:33.962073 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 7 00:00:33.962089 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 7 00:00:33.962103 kernel: cpuidle: using governor menu
Jul 7 00:00:33.962119 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 7 00:00:33.962136 kernel: dca service started, version 1.12.1
Jul 7 00:00:33.962155 kernel: PCI: Using configuration type 1 for base access
Jul 7 00:00:33.962169 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 7 00:00:33.962184 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 7 00:00:33.962199 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 7 00:00:33.962215 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 7 00:00:33.962231 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 7 00:00:33.962247 kernel: ACPI: Added _OSI(Module Device)
Jul 7 00:00:33.962260 kernel: ACPI: Added _OSI(Processor Device)
Jul 7 00:00:33.962275 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 7 00:00:33.962296 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jul 7 00:00:33.962312 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 7 00:00:33.962328 kernel: ACPI: Interpreter enabled
Jul 7 00:00:33.962343 kernel: ACPI: PM: (supports S0 S5)
Jul 7 00:00:33.962358 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 7 00:00:33.962374 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 7 00:00:33.962390 kernel: PCI: Using E820 reservations for host bridge windows
Jul 7 00:00:33.962406 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 7 00:00:33.962421 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 7 00:00:33.963743 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jul 7 00:00:33.963974 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jul 7 00:00:33.964116 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jul 7 00:00:33.964138 kernel: acpiphp: Slot [3] registered
Jul 7 00:00:33.964156 kernel: acpiphp: Slot [4] registered
Jul 7 00:00:33.964173 kernel: acpiphp: Slot [5] registered
Jul 7 00:00:33.964189 kernel: acpiphp: Slot [6] registered
Jul 7 00:00:33.964205 kernel: acpiphp: Slot [7] registered
Jul 7 00:00:33.964227 kernel: acpiphp: Slot [8] registered
Jul 7 00:00:33.964243 kernel: acpiphp: Slot [9] registered
Jul 7 00:00:33.964260 kernel: acpiphp: Slot [10] registered
Jul 7 00:00:33.964277 kernel: acpiphp: Slot [11] registered
Jul 7 00:00:33.964293 kernel: acpiphp: Slot [12] registered
Jul 7 00:00:33.964309 kernel: acpiphp: Slot [13] registered
Jul 7 00:00:33.964326 kernel: acpiphp: Slot [14] registered
Jul 7 00:00:33.964342 kernel: acpiphp: Slot [15] registered
Jul 7 00:00:33.964358 kernel: acpiphp: Slot [16] registered
Jul 7 00:00:33.964378 kernel: acpiphp: Slot [17] registered
Jul 7 00:00:33.964395 kernel: acpiphp: Slot [18] registered
Jul 7 00:00:33.964411 kernel: acpiphp: Slot [19] registered
Jul 7 00:00:33.964427 kernel: acpiphp: Slot [20] registered
Jul 7 00:00:33.964444 kernel: acpiphp: Slot [21] registered
Jul 7 00:00:33.964460 kernel: acpiphp: Slot [22] registered
Jul 7 00:00:33.964476 kernel: acpiphp: Slot [23] registered
Jul 7 00:00:33.964492 kernel: acpiphp: Slot [24] registered
Jul 7 00:00:33.964509 kernel: acpiphp: Slot [25] registered
Jul 7 00:00:33.964528 kernel: acpiphp: Slot [26] registered
Jul 7 00:00:33.964545 kernel: acpiphp: Slot [27] registered
Jul 7 00:00:33.964561 kernel: acpiphp: Slot [28] registered
Jul 7 00:00:33.964577 kernel: acpiphp: Slot [29] registered
Jul 7 00:00:33.964591 kernel: acpiphp: Slot [30] registered
Jul 7 00:00:33.964608 kernel: acpiphp: Slot [31] registered
Jul 7 00:00:33.964625 kernel: PCI host bridge to bus 0000:00
Jul 7 00:00:33.967355 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 7 00:00:33.967528 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 7 00:00:33.967674 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 7 00:00:33.967847 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jul 7 00:00:33.967968 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Jul 7 00:00:33.968089 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 7 00:00:33.968249 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jul 7 00:00:33.968399 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jul 7 00:00:33.968553 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Jul 7 00:00:33.968717 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jul 7 00:00:33.968881 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jul 7 00:00:33.969021 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jul 7 00:00:33.969159 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jul 7 00:00:33.969299 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jul 7 00:00:33.969441 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jul 7 00:00:33.969588 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jul 7 00:00:33.970893 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Jul 7 00:00:33.971061 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Jul 7 00:00:33.971211 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jul 7 00:00:33.971355 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Jul 7 00:00:33.971523 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 7 00:00:33.973797 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jul 7 00:00:33.974065 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Jul 7 00:00:33.974274 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jul 7 00:00:33.974464 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Jul 7 00:00:33.974486 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 7 00:00:33.974505 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 7 00:00:33.974522 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 7 00:00:33.974540 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 7 00:00:33.974564 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 7 00:00:33.974582 kernel: iommu: Default domain type: Translated
Jul 7 00:00:33.974600 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 7 00:00:33.974617 kernel: efivars: Registered efivars operations
Jul 7 00:00:33.974634 kernel: PCI: Using ACPI for IRQ routing
Jul 7 00:00:33.974651 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 7 00:00:33.974668 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Jul 7 00:00:33.974684 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Jul 7 00:00:33.974881 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jul 7 00:00:33.975026 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jul 7 00:00:33.975167 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 7 00:00:33.975188 kernel: vgaarb: loaded
Jul 7 00:00:33.975206 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jul 7 00:00:33.975223 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jul 7 00:00:33.975240 kernel: clocksource: Switched to clocksource kvm-clock
Jul 7 00:00:33.975257 kernel: VFS: Disk quotas dquot_6.6.0
Jul 7 00:00:33.975275 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 7 00:00:33.975295 kernel: pnp: PnP ACPI init
Jul 7 00:00:33.975312 kernel: pnp: PnP ACPI: found 5 devices
Jul 7 00:00:33.975326 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 7 00:00:33.975340 kernel: NET: Registered PF_INET protocol family
Jul 7 00:00:33.975354 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 7 00:00:33.975368 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jul 7 00:00:33.975383 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 7 00:00:33.975398 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 7 00:00:33.975411 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jul 7 00:00:33.975431 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jul 7 00:00:33.975460 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 7 00:00:33.975479 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 7 00:00:33.975498 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 7 00:00:33.975517 kernel: NET: Registered PF_XDP protocol family
Jul 7 00:00:33.975670 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 7 00:00:33.977420 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 7 00:00:33.977555 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 7 00:00:33.977681 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jul 7 00:00:33.977831 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Jul 7 00:00:33.977981 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 7 00:00:33.978002 kernel: PCI: CLS 0 bytes, default 64
Jul 7 00:00:33.978019 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jul 7 00:00:33.978035 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jul 7 00:00:33.978051 kernel: clocksource: Switched to clocksource tsc
Jul 7 00:00:33.978067 kernel: Initialise system trusted keyrings
Jul 7 00:00:33.978083 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jul 7 00:00:33.978103 kernel: Key type asymmetric registered
Jul 7 00:00:33.978118 kernel: Asymmetric key parser 'x509' registered
Jul 7 00:00:33.978134 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 7 00:00:33.978150 kernel: io scheduler mq-deadline registered
Jul 7 00:00:33.978166 kernel: io scheduler kyber registered
Jul 7 00:00:33.978181 kernel: io scheduler bfq registered
Jul 7 00:00:33.978197 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 7 00:00:33.978212 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 7 00:00:33.978228 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 7 00:00:33.978247 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 7 00:00:33.978263 kernel: i8042: Warning: Keylock active
Jul 7 00:00:33.978278 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 7 00:00:33.978294 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 7 00:00:33.978442 kernel: rtc_cmos 00:00: RTC can wake from S4
Jul 7 00:00:33.978573 kernel: rtc_cmos 00:00: registered as rtc0
Jul 7 00:00:33.978716 kernel: rtc_cmos 00:00: setting system clock to 2025-07-07T00:00:33 UTC (1751846433)
Jul 7 00:00:33.978847 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jul 7 00:00:33.978871 kernel: intel_pstate: CPU model not supported
Jul 7 00:00:33.978886 kernel: efifb: probing for efifb
Jul 7 00:00:33.978902 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Jul 7 00:00:33.978918 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Jul 7 00:00:33.978934 kernel: efifb: scrolling: redraw
Jul 7 00:00:33.978950 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 7 00:00:33.978966 kernel: Console: switching to colour frame buffer device 100x37
Jul 7 00:00:33.978981 kernel: fb0: EFI VGA frame buffer device
Jul 7 00:00:33.978997 kernel: pstore: Using crash dump compression: deflate
Jul 7 00:00:33.979016 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 7 00:00:33.979032 kernel: NET: Registered PF_INET6 protocol family
Jul 7 00:00:33.979047 kernel: Segment Routing with IPv6
Jul 7 00:00:33.979063 kernel: In-situ OAM (IOAM) with IPv6
Jul 7 00:00:33.979079 kernel: NET: Registered PF_PACKET protocol family
Jul 7 00:00:33.979094 kernel: Key type dns_resolver registered
Jul 7 00:00:33.979110 kernel: IPI shorthand broadcast: enabled
Jul 7 00:00:33.979148 kernel: sched_clock: Marking stable (499001696, 127397091)->(689078293, -62679506)
Jul 7 00:00:33.979168 kernel: registered taskstats version 1
Jul 7 00:00:33.979188 kernel: Loading compiled-in X.509 certificates
Jul 7 00:00:33.979205 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 6372c48ca52cc7f7bbee5675b604584c1c68ec5b'
Jul 7 00:00:33.979220 kernel: Key type .fscrypt registered
Jul 7 00:00:33.979234 kernel: Key type fscrypt-provisioning registered
Jul 7 00:00:33.979250 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 7 00:00:33.979263 kernel: ima: Allocated hash algorithm: sha1
Jul 7 00:00:33.979276 kernel: ima: No architecture policies found
Jul 7 00:00:33.979291 kernel: clk: Disabling unused clocks
Jul 7 00:00:33.979307 kernel: Freeing unused kernel image (initmem) memory: 42868K
Jul 7 00:00:33.979329 kernel: Write protecting the kernel read-only data: 36864k
Jul 7 00:00:33.979348 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Jul 7 00:00:33.979366 kernel: Run /init as init process
Jul 7 00:00:33.979385 kernel: with arguments:
Jul 7 00:00:33.979403 kernel: /init
Jul 7 00:00:33.979421 kernel: with environment:
Jul 7 00:00:33.979445 kernel: HOME=/
Jul 7 00:00:33.979459 kernel: TERM=linux
Jul 7 00:00:33.979472 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 7 00:00:33.979496 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 7 00:00:33.979515 systemd[1]: Detected virtualization amazon.
Jul 7 00:00:33.979532 systemd[1]: Detected architecture x86-64.
Jul 7 00:00:33.979547 systemd[1]: Running in initrd.
Jul 7 00:00:33.979563 systemd[1]: No hostname configured, using default hostname.
Jul 7 00:00:33.979578 systemd[1]: Hostname set to <localhost>.
Jul 7 00:00:33.979594 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 00:00:33.979614 systemd[1]: Queued start job for default target initrd.target.
Jul 7 00:00:33.979630 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 00:00:33.979646 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 00:00:33.979662 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 7 00:00:33.979678 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 00:00:33.979708 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 7 00:00:33.979726 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 7 00:00:33.979750 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 7 00:00:33.979769 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 7 00:00:33.979787 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 00:00:33.979805 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 00:00:33.979822 systemd[1]: Reached target paths.target - Path Units.
Jul 7 00:00:33.979842 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 00:00:33.979860 systemd[1]: Reached target swap.target - Swaps.
Jul 7 00:00:33.979878 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 00:00:33.979895 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 00:00:33.979914 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 00:00:33.979930 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 00:00:33.979945 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 7 00:00:33.979960 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 00:00:33.979978 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 00:00:33.979993 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 00:00:33.980010 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 00:00:33.980028 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 7 00:00:33.980047 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 00:00:33.980064 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 7 00:00:33.980082 systemd[1]: Starting systemd-fsck-usr.service...
Jul 7 00:00:33.980096 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 00:00:33.980111 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 00:00:33.980129 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:00:33.980178 systemd-journald[178]: Collecting audit messages is disabled.
Jul 7 00:00:33.980212 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 7 00:00:33.980229 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 00:00:33.980246 systemd[1]: Finished systemd-fsck-usr.service.
Jul 7 00:00:33.980264 systemd-journald[178]: Journal started
Jul 7 00:00:33.980302 systemd-journald[178]: Runtime Journal (/run/log/journal/ec2d4105b30e72d51838d926006d7a14) is 4.7M, max 38.2M, 33.4M free.
Jul 7 00:00:33.990211 systemd-modules-load[179]: Inserted module 'overlay'
Jul 7 00:00:33.993257 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 00:00:33.996727 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 00:00:33.997440 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:00:33.999239 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 00:00:34.007594 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 7 00:00:34.013276 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 00:00:34.018569 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 00:00:34.029389 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 00:00:34.040330 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 7 00:00:34.043853 systemd-modules-load[179]: Inserted module 'br_netfilter'
Jul 7 00:00:34.045435 kernel: Bridge firewalling registered
Jul 7 00:00:34.047027 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 00:00:34.049937 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 00:00:34.057013 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 7 00:00:34.060792 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 00:00:34.063579 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 00:00:34.065952 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 00:00:34.083249 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 00:00:34.089609 dracut-cmdline[205]: dracut-dracut-053
Jul 7 00:00:34.095211 dracut-cmdline[205]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 7 00:00:34.093983 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 00:00:34.143867 systemd-resolved[219]: Positive Trust Anchors:
Jul 7 00:00:34.144853 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 00:00:34.144921 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 00:00:34.152575 systemd-resolved[219]: Defaulting to hostname 'linux'.
Jul 7 00:00:34.156467 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 00:00:34.157197 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 00:00:34.189742 kernel: SCSI subsystem initialized
Jul 7 00:00:34.201726 kernel: Loading iSCSI transport class v2.0-870.
Jul 7 00:00:34.213728 kernel: iscsi: registered transport (tcp)
Jul 7 00:00:34.234904 kernel: iscsi: registered transport (qla4xxx)
Jul 7 00:00:34.234983 kernel: QLogic iSCSI HBA Driver
Jul 7 00:00:34.277429 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 7 00:00:34.287930 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 7 00:00:34.314789 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 7 00:00:34.314868 kernel: device-mapper: uevent: version 1.0.3
Jul 7 00:00:34.315740 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 7 00:00:34.359740 kernel: raid6: avx512x4 gen() 17812 MB/s
Jul 7 00:00:34.377721 kernel: raid6: avx512x2 gen() 17217 MB/s
Jul 7 00:00:34.395726 kernel: raid6: avx512x1 gen() 17816 MB/s
Jul 7 00:00:34.413720 kernel: raid6: avx2x4 gen() 17712 MB/s
Jul 7 00:00:34.431724 kernel: raid6: avx2x2 gen() 17802 MB/s
Jul 7 00:00:34.449991 kernel: raid6: avx2x1 gen() 13659 MB/s
Jul 7 00:00:34.450057 kernel: raid6: using algorithm avx512x1 gen() 17816 MB/s
Jul 7 00:00:34.468978 kernel: raid6: .... xor() 21454 MB/s, rmw enabled
Jul 7 00:00:34.469054 kernel: raid6: using avx512x2 recovery algorithm
Jul 7 00:00:34.490735 kernel: xor: automatically using best checksumming function avx
Jul 7 00:00:34.650757 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 7 00:00:34.661721 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 00:00:34.670919 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 00:00:34.684327 systemd-udevd[395]: Using default interface naming scheme 'v255'.
Jul 7 00:00:34.689490 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 00:00:34.697896 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 7 00:00:34.717650 dracut-pre-trigger[400]: rd.md=0: removing MD RAID activation
Jul 7 00:00:34.748981 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 00:00:34.755978 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 00:00:34.809404 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 00:00:34.819049 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 7 00:00:34.838871 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 7 00:00:34.841319 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 00:00:34.842806 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 00:00:34.844589 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 00:00:34.853022 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 7 00:00:34.874804 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 00:00:34.910362 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jul 7 00:00:34.910670 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jul 7 00:00:34.916740 kernel: cryptd: max_cpu_qlen set to 1000
Jul 7 00:00:34.919731 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jul 7 00:00:34.928773 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 00:00:34.928955 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 00:00:34.929880 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 00:00:34.932057 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 00:00:34.932275 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:00:34.935785 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:00:34.946732 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:5b:cc:ba:a1:a7
Jul 7 00:00:34.946321 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:00:34.962199 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 7 00:00:34.962272 kernel: AES CTR mode by8 optimization enabled
Jul 7 00:00:34.966450 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 00:00:34.967078 (udev-worker)[453]: Network interface NamePolicy= disabled on kernel command line.
Jul 7 00:00:34.967641 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:00:34.979737 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:00:35.000756 kernel: nvme nvme0: pci function 0000:00:04.0
Jul 7 00:00:35.005636 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jul 7 00:00:35.016075 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:00:35.022717 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jul 7 00:00:35.023031 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 00:00:35.029994 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 7 00:00:35.030068 kernel: GPT:9289727 != 16777215
Jul 7 00:00:35.031031 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 7 00:00:35.031077 kernel: GPT:9289727 != 16777215
Jul 7 00:00:35.031097 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 7 00:00:35.031129 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 7 00:00:35.054598 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 00:00:35.121720 kernel: BTRFS: device fsid 01287863-c21f-4cbb-820d-bbae8208f32f devid 1 transid 34 /dev/nvme0n1p3 scanned by (udev-worker) (441)
Jul 7 00:00:35.125742 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (450)
Jul 7 00:00:35.172932 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jul 7 00:00:35.198128 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jul 7 00:00:35.200467 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jul 7 00:00:35.215573 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jul 7 00:00:35.222497 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jul 7 00:00:35.232918 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 7 00:00:35.242059 disk-uuid[631]: Primary Header is updated.
Jul 7 00:00:35.242059 disk-uuid[631]: Secondary Entries is updated.
Jul 7 00:00:35.242059 disk-uuid[631]: Secondary Header is updated.
Jul 7 00:00:35.247724 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 7 00:00:35.253730 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 7 00:00:35.261062 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 7 00:00:36.266748 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 7 00:00:36.267298 disk-uuid[632]: The operation has completed successfully.
Jul 7 00:00:36.398010 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 7 00:00:36.398140 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 7 00:00:36.424976 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 7 00:00:36.430351 sh[973]: Success
Jul 7 00:00:36.453039 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jul 7 00:00:36.537710 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 7 00:00:36.544826 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 7 00:00:36.548108 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 7 00:00:36.572961 kernel: BTRFS info (device dm-0): first mount of filesystem 01287863-c21f-4cbb-820d-bbae8208f32f
Jul 7 00:00:36.573021 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:00:36.576101 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 7 00:00:36.576160 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 7 00:00:36.577366 kernel: BTRFS info (device dm-0): using free space tree
Jul 7 00:00:36.654719 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jul 7 00:00:36.668841 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 7 00:00:36.669889 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 7 00:00:36.673877 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 7 00:00:36.677866 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 7 00:00:36.698710 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 7 00:00:36.698784 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:00:36.701650 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 7 00:00:36.717760 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 7 00:00:36.732715 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 7 00:00:36.732728 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 7 00:00:36.739884 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 7 00:00:36.745919 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 7 00:00:36.779926 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 00:00:36.788953 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 00:00:36.809937 systemd-networkd[1165]: lo: Link UP
Jul 7 00:00:36.809949 systemd-networkd[1165]: lo: Gained carrier
Jul 7 00:00:36.811811 systemd-networkd[1165]: Enumeration completed
Jul 7 00:00:36.812288 systemd-networkd[1165]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 00:00:36.812294 systemd-networkd[1165]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 00:00:36.813455 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 00:00:36.814842 systemd[1]: Reached target network.target - Network.
Jul 7 00:00:36.816544 systemd-networkd[1165]: eth0: Link UP
Jul 7 00:00:36.816550 systemd-networkd[1165]: eth0: Gained carrier
Jul 7 00:00:36.816565 systemd-networkd[1165]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 00:00:36.827815 systemd-networkd[1165]: eth0: DHCPv4 address 172.31.20.165/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 7 00:00:37.103232 ignition[1118]: Ignition 2.19.0
Jul 7 00:00:37.103253 ignition[1118]: Stage: fetch-offline
Jul 7 00:00:37.103586 ignition[1118]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:00:37.103600 ignition[1118]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 7 00:00:37.108525 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 00:00:37.104017 ignition[1118]: Ignition finished successfully
Jul 7 00:00:37.114937 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 7 00:00:37.133563 ignition[1173]: Ignition 2.19.0
Jul 7 00:00:37.133578 ignition[1173]: Stage: fetch
Jul 7 00:00:37.134090 ignition[1173]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:00:37.134105 ignition[1173]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 7 00:00:37.134237 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 7 00:00:37.195670 ignition[1173]: PUT result: OK
Jul 7 00:00:37.202447 ignition[1173]: parsed url from cmdline: ""
Jul 7 00:00:37.202457 ignition[1173]: no config URL provided
Jul 7 00:00:37.202466 ignition[1173]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 00:00:37.202479 ignition[1173]: no config at "/usr/lib/ignition/user.ign"
Jul 7 00:00:37.202499 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 7 00:00:37.203241 ignition[1173]: PUT result: OK
Jul 7 00:00:37.203310 ignition[1173]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jul 7 00:00:37.204366 ignition[1173]: GET result: OK
Jul 7 00:00:37.205076 ignition[1173]: parsing config with SHA512: 979db1361f582f4dd8a3b0a12b3e2fe47063b81ecb4fb33dc4ce562e4dda839582d1cf7379f1835ed2f5eb6445f1ec4cfd912c26cf8781b72de7679323c3c600
Jul 7 00:00:37.210182 unknown[1173]: fetched base config from "system"
Jul 7 00:00:37.210196 unknown[1173]: fetched base config from "system"
Jul 7 00:00:37.210204 unknown[1173]: fetched user config from "aws"
Jul 7 00:00:37.213674 ignition[1173]: fetch: fetch complete
Jul 7 00:00:37.214379 ignition[1173]: fetch: fetch passed
Jul 7 00:00:37.214472 ignition[1173]: Ignition finished successfully
Jul 7 00:00:37.216974 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 7 00:00:37.223004 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 7 00:00:37.244946 ignition[1180]: Ignition 2.19.0
Jul 7 00:00:37.244959 ignition[1180]: Stage: kargs
Jul 7 00:00:37.245469 ignition[1180]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:00:37.245483 ignition[1180]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 7 00:00:37.245606 ignition[1180]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 7 00:00:37.246668 ignition[1180]: PUT result: OK
Jul 7 00:00:37.249601 ignition[1180]: kargs: kargs passed
Jul 7 00:00:37.249680 ignition[1180]: Ignition finished successfully
Jul 7 00:00:37.251188 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 7 00:00:37.256995 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 7 00:00:37.273653 ignition[1186]: Ignition 2.19.0
Jul 7 00:00:37.273666 ignition[1186]: Stage: disks
Jul 7 00:00:37.274198 ignition[1186]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:00:37.274213 ignition[1186]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 7 00:00:37.274334 ignition[1186]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 7 00:00:37.275225 ignition[1186]: PUT result: OK
Jul 7 00:00:37.278086 ignition[1186]: disks: disks passed
Jul 7 00:00:37.278167 ignition[1186]: Ignition finished successfully
Jul 7 00:00:37.280208 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 7 00:00:37.280913 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 7 00:00:37.281272 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 7 00:00:37.281831 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 00:00:37.282348 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 00:00:37.282913 systemd[1]: Reached target basic.target - Basic System.
Jul 7 00:00:37.287921 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 7 00:00:37.328730 systemd-fsck[1194]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 7 00:00:37.331686 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 7 00:00:37.337881 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 7 00:00:37.438738 kernel: EXT4-fs (nvme0n1p9): mounted filesystem c3eefe20-4a42-420d-8034-4d5498275b2f r/w with ordered data mode. Quota mode: none.
Jul 7 00:00:37.439324 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 7 00:00:37.441020 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 7 00:00:37.459911 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 00:00:37.463347 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 7 00:00:37.465742 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 7 00:00:37.466099 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 7 00:00:37.466135 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 00:00:37.481123 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 7 00:00:37.486727 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1213)
Jul 7 00:00:37.489278 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 7 00:00:37.494819 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 7 00:00:37.494857 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:00:37.494878 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 7 00:00:37.503721 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 7 00:00:37.504130 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 00:00:37.798294 initrd-setup-root[1237]: cut: /sysroot/etc/passwd: No such file or directory
Jul 7 00:00:37.841032 initrd-setup-root[1244]: cut: /sysroot/etc/group: No such file or directory
Jul 7 00:00:37.858095 initrd-setup-root[1251]: cut: /sysroot/etc/shadow: No such file or directory
Jul 7 00:00:37.862921 initrd-setup-root[1258]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 7 00:00:38.097368 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 7 00:00:38.106886 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 7 00:00:38.108920 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 7 00:00:38.117717 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 7 00:00:38.121085 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 7 00:00:38.151409 ignition[1325]: INFO : Ignition 2.19.0 Jul 7 00:00:38.151409 ignition[1325]: INFO : Stage: mount Jul 7 00:00:38.152560 ignition[1325]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 00:00:38.152988 ignition[1325]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 7 00:00:38.152988 ignition[1325]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 7 00:00:38.153981 ignition[1325]: INFO : PUT result: OK Jul 7 00:00:38.157332 ignition[1325]: INFO : mount: mount passed Jul 7 00:00:38.157853 ignition[1325]: INFO : Ignition finished successfully Jul 7 00:00:38.158558 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 7 00:00:38.160982 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 7 00:00:38.166880 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 7 00:00:38.187030 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 00:00:38.208734 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1338) Jul 7 00:00:38.212666 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 7 00:00:38.212997 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 7 00:00:38.213021 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 7 00:00:38.219719 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 7 00:00:38.222332 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 00:00:38.243053 ignition[1354]: INFO : Ignition 2.19.0 Jul 7 00:00:38.243053 ignition[1354]: INFO : Stage: files Jul 7 00:00:38.244629 ignition[1354]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 00:00:38.244629 ignition[1354]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 7 00:00:38.244629 ignition[1354]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 7 00:00:38.245947 ignition[1354]: INFO : PUT result: OK Jul 7 00:00:38.248030 ignition[1354]: DEBUG : files: compiled without relabeling support, skipping Jul 7 00:00:38.249481 ignition[1354]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 7 00:00:38.249481 ignition[1354]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 7 00:00:38.266569 ignition[1354]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 7 00:00:38.267380 ignition[1354]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 7 00:00:38.267380 ignition[1354]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 7 00:00:38.266966 unknown[1354]: wrote ssh authorized keys file for user: core Jul 7 00:00:38.281435 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 7 00:00:38.281435 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jul 7 00:00:38.357829 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 7 00:00:38.462825 systemd-networkd[1165]: eth0: Gained IPv6LL Jul 7 00:00:38.546249 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file 
"/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 7 00:00:38.546249 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 7 00:00:38.548061 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 7 00:00:38.548061 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 7 00:00:38.548061 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 7 00:00:38.548061 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 00:00:38.548061 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 00:00:38.548061 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 00:00:38.548061 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 00:00:38.548061 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 00:00:38.548061 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 00:00:38.548061 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 7 00:00:38.548061 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 7 00:00:38.548061 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 7 00:00:38.548061 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jul 7 00:00:39.116494 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 7 00:00:39.659851 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 7 00:00:39.659851 ignition[1354]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 7 00:00:39.661917 ignition[1354]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 00:00:39.661917 ignition[1354]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 00:00:39.661917 ignition[1354]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 7 00:00:39.661917 ignition[1354]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 7 00:00:39.661917 ignition[1354]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 7 00:00:39.661917 ignition[1354]: INFO : files: createResultFile: 
createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 7 00:00:39.666869 ignition[1354]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 7 00:00:39.666869 ignition[1354]: INFO : files: files passed Jul 7 00:00:39.666869 ignition[1354]: INFO : Ignition finished successfully Jul 7 00:00:39.664271 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 7 00:00:39.678012 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 7 00:00:39.681926 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 7 00:00:39.684929 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 7 00:00:39.685889 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 7 00:00:39.700082 initrd-setup-root-after-ignition[1384]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 00:00:39.700082 initrd-setup-root-after-ignition[1384]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 7 00:00:39.704084 initrd-setup-root-after-ignition[1388]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 00:00:39.704575 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 00:00:39.706262 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 7 00:00:39.712921 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 7 00:00:39.749676 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 7 00:00:39.749830 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 7 00:00:39.751203 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 7 00:00:39.752475 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 7 00:00:39.753406 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 7 00:00:39.759942 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 7 00:00:39.773395 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 00:00:39.778940 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 7 00:00:39.791968 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 7 00:00:39.792784 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 00:00:39.793840 systemd[1]: Stopped target timers.target - Timer Units. Jul 7 00:00:39.794746 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 7 00:00:39.794934 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 00:00:39.796257 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 7 00:00:39.797122 systemd[1]: Stopped target basic.target - Basic System. Jul 7 00:00:39.797914 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 7 00:00:39.798666 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 00:00:39.799538 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 7 00:00:39.800363 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 7 00:00:39.801130 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
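
Each createFiles op in the files stage above fetches (or takes literal) contents and writes them beneath the still-mounted /sysroot, with op(9) adding a symlink so systemd-sysext finds the kubernetes image on the real root. A rough stand-in for one fetch-and-write op, with the URL and paths taken from the log (the helper itself is illustrative, not Ignition's actual code):

    import os
    import urllib.request

    SYSROOT = "/sysroot"

    def write_file(url: str, dest: str) -> None:
        # Fetch the source and write it beneath the target root,
        # creating parent directories much as Ignition does.
        path = SYSROOT + dest
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with urllib.request.urlopen(url) as resp, open(path, "wb") as out:
            out.write(resp.read())

    write_file(
        "https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw",
        "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
    )
    # op(9): expose the image to systemd-sysext via /etc/extensions.
    os.makedirs(SYSROOT + "/etc/extensions", exist_ok=True)
    os.symlink(
        "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
        SYSROOT + "/etc/extensions/kubernetes.raw",
    )
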
Jul 7 00:00:39.801901 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 7 00:00:39.803033 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 7 00:00:39.803881 systemd[1]: Stopped target swap.target - Swaps. Jul 7 00:00:39.804581 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 7 00:00:39.804786 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 7 00:00:39.805867 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 7 00:00:39.806639 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 00:00:39.807371 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 7 00:00:39.807615 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 00:00:39.808288 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 7 00:00:39.808469 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 7 00:00:39.809821 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 7 00:00:39.810007 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 00:00:39.810711 systemd[1]: ignition-files.service: Deactivated successfully. Jul 7 00:00:39.810866 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 7 00:00:39.823728 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 7 00:00:39.827070 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 7 00:00:39.828912 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 7 00:00:39.829244 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 00:00:39.831320 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 7 00:00:39.831933 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 00:00:39.844622 ignition[1408]: INFO : Ignition 2.19.0 Jul 7 00:00:39.844622 ignition[1408]: INFO : Stage: umount Jul 7 00:00:39.844622 ignition[1408]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 00:00:39.844622 ignition[1408]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 7 00:00:39.844622 ignition[1408]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 7 00:00:39.850184 ignition[1408]: INFO : PUT result: OK Jul 7 00:00:39.847665 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 7 00:00:39.847815 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 7 00:00:39.855198 ignition[1408]: INFO : umount: umount passed Jul 7 00:00:39.855785 ignition[1408]: INFO : Ignition finished successfully Jul 7 00:00:39.856898 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 7 00:00:39.857070 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 7 00:00:39.858506 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 7 00:00:39.858626 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 7 00:00:39.859283 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 7 00:00:39.860760 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 7 00:00:39.861377 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 7 00:00:39.861439 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 7 00:00:39.861941 systemd[1]: Stopped target network.target - Network. 
Jul 7 00:00:39.862394 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 7 00:00:39.862459 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 00:00:39.862938 systemd[1]: Stopped target paths.target - Path Units. Jul 7 00:00:39.863664 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 7 00:00:39.867802 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 00:00:39.868351 systemd[1]: Stopped target slices.target - Slice Units. Jul 7 00:00:39.868900 systemd[1]: Stopped target sockets.target - Socket Units. Jul 7 00:00:39.869373 systemd[1]: iscsid.socket: Deactivated successfully. Jul 7 00:00:39.869433 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 00:00:39.869971 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 7 00:00:39.870024 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 00:00:39.870536 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 7 00:00:39.870632 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 7 00:00:39.872257 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 7 00:00:39.872335 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 7 00:00:39.873926 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 7 00:00:39.875923 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 7 00:00:39.878066 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 7 00:00:39.878741 systemd-networkd[1165]: eth0: DHCPv6 lease lost Jul 7 00:00:39.880555 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 7 00:00:39.880662 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 7 00:00:39.881993 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 7 00:00:39.882073 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 7 00:00:39.887840 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 7 00:00:39.889110 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 7 00:00:39.889801 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 00:00:39.892209 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 00:00:39.893435 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 7 00:00:39.893578 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 7 00:00:39.904437 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 00:00:39.904578 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:00:39.906486 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 7 00:00:39.906538 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 7 00:00:39.907082 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 7 00:00:39.907156 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 00:00:39.909963 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 7 00:00:39.910188 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 00:00:39.913154 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Jul 7 00:00:39.913249 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 7 00:00:39.914791 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 7 00:00:39.914836 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 00:00:39.915187 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 7 00:00:39.915231 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 7 00:00:39.916123 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 7 00:00:39.916188 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 7 00:00:39.917219 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 00:00:39.917280 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 00:00:39.925947 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 7 00:00:39.926580 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 7 00:00:39.926668 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 00:00:39.927559 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 7 00:00:39.927630 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 00:00:39.929849 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 7 00:00:39.929915 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 00:00:39.930497 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 00:00:39.930555 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:00:39.932806 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 7 00:00:39.932979 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 7 00:00:39.936182 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 7 00:00:39.936334 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 7 00:00:40.154616 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 7 00:00:40.154845 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 7 00:00:40.156400 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 7 00:00:40.157038 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 7 00:00:40.157120 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 7 00:00:40.163899 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 7 00:00:40.173174 systemd[1]: Switching root. Jul 7 00:00:40.204195 systemd-journald[178]: Journal stopped Jul 7 00:00:42.006447 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). 
Jul 7 00:00:42.006571 kernel: SELinux: policy capability network_peer_controls=1 Jul 7 00:00:42.006604 kernel: SELinux: policy capability open_perms=1 Jul 7 00:00:42.006635 kernel: SELinux: policy capability extended_socket_class=1 Jul 7 00:00:42.006659 kernel: SELinux: policy capability always_check_network=0 Jul 7 00:00:42.006682 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 7 00:00:42.012084 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 7 00:00:42.012127 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 7 00:00:42.012152 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 7 00:00:42.012182 kernel: audit: type=1403 audit(1751846440.776:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 7 00:00:42.012209 systemd[1]: Successfully loaded SELinux policy in 76.494ms. Jul 7 00:00:42.012254 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.712ms. Jul 7 00:00:42.012283 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 7 00:00:42.012310 systemd[1]: Detected virtualization amazon. Jul 7 00:00:42.012337 systemd[1]: Detected architecture x86-64. Jul 7 00:00:42.012361 systemd[1]: Detected first boot. Jul 7 00:00:42.012386 systemd[1]: Initializing machine ID from VM UUID. Jul 7 00:00:42.012409 zram_generator::config[1451]: No configuration found. Jul 7 00:00:42.012435 systemd[1]: Populated /etc with preset unit settings. Jul 7 00:00:42.012466 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 7 00:00:42.012492 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 7 00:00:42.012517 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 7 00:00:42.012544 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 7 00:00:42.012568 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 7 00:00:42.012594 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 7 00:00:42.012619 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 7 00:00:42.012645 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 7 00:00:42.012670 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 7 00:00:42.015219 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 7 00:00:42.015271 systemd[1]: Created slice user.slice - User and Session Slice. Jul 7 00:00:42.015299 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 00:00:42.015327 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 00:00:42.015353 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 7 00:00:42.015378 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 7 00:00:42.015403 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 7 00:00:42.015441 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jul 7 00:00:42.015467 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 7 00:00:42.015502 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 00:00:42.015527 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 7 00:00:42.015549 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 7 00:00:42.015577 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 7 00:00:42.015603 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 7 00:00:42.015629 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 00:00:42.015661 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 00:00:42.015814 systemd[1]: Reached target slices.target - Slice Units. Jul 7 00:00:42.015845 systemd[1]: Reached target swap.target - Swaps. Jul 7 00:00:42.015872 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 7 00:00:42.015896 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 7 00:00:42.015921 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 00:00:42.015947 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 00:00:42.015973 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 00:00:42.015999 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 7 00:00:42.016024 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 7 00:00:42.016048 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 7 00:00:42.016080 systemd[1]: Mounting media.mount - External Media Directory... Jul 7 00:00:42.016108 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:00:42.016134 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 7 00:00:42.016158 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 7 00:00:42.016183 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 7 00:00:42.016209 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 7 00:00:42.016235 systemd[1]: Reached target machines.target - Containers. Jul 7 00:00:42.016263 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 7 00:00:42.016293 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:00:42.016320 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 00:00:42.016344 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 7 00:00:42.016369 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 00:00:42.016395 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 00:00:42.016422 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 00:00:42.016449 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 7 00:00:42.016475 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jul 7 00:00:42.016503 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 7 00:00:42.016529 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 7 00:00:42.016554 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 7 00:00:42.016580 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 7 00:00:42.016606 systemd[1]: Stopped systemd-fsck-usr.service. Jul 7 00:00:42.016632 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 00:00:42.016658 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 00:00:42.016685 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 00:00:42.021562 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 7 00:00:42.021611 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 00:00:42.021637 systemd[1]: verity-setup.service: Deactivated successfully. Jul 7 00:00:42.021664 systemd[1]: Stopped verity-setup.service. Jul 7 00:00:42.021705 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:00:42.021743 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 7 00:00:42.021766 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 7 00:00:42.021792 kernel: ACPI: bus type drm_connector registered Jul 7 00:00:42.021819 systemd[1]: Mounted media.mount - External Media Directory. Jul 7 00:00:42.021844 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 7 00:00:42.021876 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 7 00:00:42.021901 kernel: fuse: init (API version 7.39) Jul 7 00:00:42.021924 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 7 00:00:42.021951 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 00:00:42.021977 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 7 00:00:42.022007 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 7 00:00:42.022031 kernel: loop: module loaded Jul 7 00:00:42.022054 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:00:42.022078 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:00:42.022104 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 00:00:42.022138 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 00:00:42.022164 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 00:00:42.022189 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 00:00:42.022213 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 7 00:00:42.022283 systemd-journald[1533]: Collecting audit messages is disabled. Jul 7 00:00:42.022339 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 7 00:00:42.022366 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 00:00:42.022390 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 00:00:42.022420 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jul 7 00:00:42.022448 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 00:00:42.022475 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 7 00:00:42.022501 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 00:00:42.022528 systemd-journald[1533]: Journal started Jul 7 00:00:42.022574 systemd-journald[1533]: Runtime Journal (/run/log/journal/ec2d4105b30e72d51838d926006d7a14) is 4.7M, max 38.2M, 33.4M free. Jul 7 00:00:41.587747 systemd[1]: Queued start job for default target multi-user.target. Jul 7 00:00:41.620652 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jul 7 00:00:41.621169 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 7 00:00:42.035499 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 7 00:00:42.046736 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 7 00:00:42.054231 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 7 00:00:42.059722 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 00:00:42.064755 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 7 00:00:42.074716 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 7 00:00:42.078712 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 7 00:00:42.078791 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:00:42.090954 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 7 00:00:42.103719 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 00:00:42.103813 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 7 00:00:42.103841 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 00:00:42.115745 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 00:00:42.122825 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 7 00:00:42.135718 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 00:00:42.143710 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 00:00:42.150867 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 7 00:00:42.159774 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 00:00:42.162889 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 7 00:00:42.163861 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 7 00:00:42.164935 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 7 00:00:42.167782 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 7 00:00:42.218179 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 7 00:00:42.223733 kernel: loop0: detected capacity change from 0 to 142488 Jul 7 00:00:42.221132 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 7 00:00:42.232297 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 7 00:00:42.239555 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 7 00:00:42.244318 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 7 00:00:42.252043 systemd-journald[1533]: Time spent on flushing to /var/log/journal/ec2d4105b30e72d51838d926006d7a14 is 90.167ms for 992 entries. Jul 7 00:00:42.252043 systemd-journald[1533]: System Journal (/var/log/journal/ec2d4105b30e72d51838d926006d7a14) is 8.0M, max 195.6M, 187.6M free. Jul 7 00:00:42.353989 systemd-journald[1533]: Received client request to flush runtime journal. Jul 7 00:00:42.354100 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 7 00:00:42.263012 systemd-tmpfiles[1563]: ACLs are not supported, ignoring. Jul 7 00:00:42.263036 systemd-tmpfiles[1563]: ACLs are not supported, ignoring. Jul 7 00:00:42.275239 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 00:00:42.287215 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 7 00:00:42.313366 udevadm[1594]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 7 00:00:42.356622 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 7 00:00:42.371741 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 7 00:00:42.373354 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 7 00:00:42.382719 kernel: loop1: detected capacity change from 0 to 140768 Jul 7 00:00:42.415685 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 7 00:00:42.424108 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 00:00:42.448909 systemd-tmpfiles[1606]: ACLs are not supported, ignoring. Jul 7 00:00:42.449249 systemd-tmpfiles[1606]: ACLs are not supported, ignoring. Jul 7 00:00:42.454658 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 00:00:42.467731 kernel: loop2: detected capacity change from 0 to 224512 Jul 7 00:00:42.585779 kernel: loop3: detected capacity change from 0 to 61336 Jul 7 00:00:42.694020 kernel: loop4: detected capacity change from 0 to 142488 Jul 7 00:00:42.743725 kernel: loop5: detected capacity change from 0 to 140768 Jul 7 00:00:42.780299 kernel: loop6: detected capacity change from 0 to 224512 Jul 7 00:00:42.817728 kernel: loop7: detected capacity change from 0 to 61336 Jul 7 00:00:42.837526 (sd-merge)[1612]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jul 7 00:00:42.838246 (sd-merge)[1612]: Merged extensions into '/usr'. Jul 7 00:00:42.844045 systemd[1]: Reloading requested from client PID 1562 ('systemd-sysext') (unit systemd-sysext.service)... Jul 7 00:00:42.844078 systemd[1]: Reloading... Jul 7 00:00:42.977727 zram_generator::config[1638]: No configuration found. 
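
The loop0-loop7 capacity changes above come as two matching sets of four, one loop device per attachment of the same four extension images; sd-merge then lists the extensions and overlays them onto /usr, after which systemd reloads its units. A sketch of enumerating the images systemd-sysext would merge, assuming the directory layout seen earlier in this log (illustrative only; systemd-sysext also consults other fixed directories such as /run/extensions and /var/lib/extensions):

    import os

    # systemd-sysext picks up *.raw images (or symlinks to them) from a
    # few fixed directories; /etc/extensions is the one populated above.
    EXT_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    for d in EXT_DIRS:
        if not os.path.isdir(d):
            continue
        for name in sorted(os.listdir(d)):
            if name.endswith(".raw"):
                target = os.path.realpath(os.path.join(d, name))
                print(f"{name[:-4]} -> {target}")
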
Jul 7 00:00:43.197558 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:00:43.276970 systemd[1]: Reloading finished in 432 ms. Jul 7 00:00:43.310029 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 7 00:00:43.310862 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 7 00:00:43.318948 systemd[1]: Starting ensure-sysext.service... Jul 7 00:00:43.324547 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 00:00:43.331924 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 00:00:43.343930 systemd[1]: Reloading requested from client PID 1690 ('systemctl') (unit ensure-sysext.service)... Jul 7 00:00:43.343957 systemd[1]: Reloading... Jul 7 00:00:43.372244 systemd-tmpfiles[1691]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 7 00:00:43.374861 systemd-tmpfiles[1691]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 7 00:00:43.377334 systemd-tmpfiles[1691]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 7 00:00:43.378934 systemd-tmpfiles[1691]: ACLs are not supported, ignoring. Jul 7 00:00:43.379034 systemd-tmpfiles[1691]: ACLs are not supported, ignoring. Jul 7 00:00:43.390253 systemd-tmpfiles[1691]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 00:00:43.390269 systemd-tmpfiles[1691]: Skipping /boot Jul 7 00:00:43.427406 systemd-tmpfiles[1691]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 00:00:43.427438 systemd-tmpfiles[1691]: Skipping /boot Jul 7 00:00:43.434686 systemd-udevd[1692]: Using default interface naming scheme 'v255'. Jul 7 00:00:43.470716 zram_generator::config[1722]: No configuration found. Jul 7 00:00:43.669762 ldconfig[1558]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 7 00:00:43.691884 (udev-worker)[1779]: Network interface NamePolicy= disabled on kernel command line. Jul 7 00:00:43.739756 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:00:43.814720 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 7 00:00:43.838723 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jul 7 00:00:43.852732 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Jul 7 00:00:43.861486 kernel: ACPI: button: Power Button [PWRF] Jul 7 00:00:43.867712 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Jul 7 00:00:43.875574 kernel: ACPI: button: Sleep Button [SLPF] Jul 7 00:00:43.875671 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1759) Jul 7 00:00:43.914117 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 7 00:00:43.914374 systemd[1]: Reloading finished in 569 ms. Jul 7 00:00:43.938749 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jul 7 00:00:43.941550 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 7 00:00:43.943786 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 00:00:43.968721 kernel: mousedev: PS/2 mouse device common for all mice Jul 7 00:00:44.026471 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:00:44.035090 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 7 00:00:44.038014 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 7 00:00:44.039544 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:00:44.042572 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 00:00:44.046832 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 00:00:44.057856 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 00:00:44.067090 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 00:00:44.068744 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:00:44.074836 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 7 00:00:44.079922 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 00:00:44.085119 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 00:00:44.086771 systemd[1]: Reached target time-set.target - System Time Set. Jul 7 00:00:44.095120 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 7 00:00:44.098515 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:00:44.099203 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:00:44.104405 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:00:44.105892 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:00:44.116512 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 00:00:44.117747 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 00:00:44.130945 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 7 00:00:44.132833 systemd[1]: Finished ensure-sysext.service. Jul 7 00:00:44.145198 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 00:00:44.145406 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 00:00:44.150373 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 00:00:44.151285 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 00:00:44.184464 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 7 00:00:44.186049 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 00:00:44.186273 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jul 7 00:00:44.197929 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 7 00:00:44.219199 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 7 00:00:44.237531 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 7 00:00:44.250124 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 7 00:00:44.251742 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 7 00:00:44.285721 lvm[1918]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 7 00:00:44.288966 augenrules[1921]: No rules Jul 7 00:00:44.293829 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 7 00:00:44.309107 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 7 00:00:44.323099 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 7 00:00:44.323961 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 7 00:00:44.325482 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 7 00:00:44.328178 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 00:00:44.338991 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 7 00:00:44.346799 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 7 00:00:44.347847 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 00:00:44.355862 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 7 00:00:44.378714 lvm[1934]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 7 00:00:44.414420 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:00:44.416232 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 7 00:00:44.456045 systemd-networkd[1882]: lo: Link UP Jul 7 00:00:44.456058 systemd-networkd[1882]: lo: Gained carrier Jul 7 00:00:44.457872 systemd-networkd[1882]: Enumeration completed Jul 7 00:00:44.458027 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 00:00:44.459741 systemd-networkd[1882]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:00:44.459749 systemd-networkd[1882]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 00:00:44.465064 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 7 00:00:44.467484 systemd-networkd[1882]: eth0: Link UP Jul 7 00:00:44.467775 systemd-networkd[1882]: eth0: Gained carrier Jul 7 00:00:44.467807 systemd-networkd[1882]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:00:44.477502 systemd-resolved[1886]: Positive Trust Anchors: Jul 7 00:00:44.477519 systemd-resolved[1886]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 00:00:44.477574 systemd-resolved[1886]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 00:00:44.479826 systemd-networkd[1882]: eth0: DHCPv4 address 172.31.20.165/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 7 00:00:44.494053 systemd-resolved[1886]: Defaulting to hostname 'linux'. Jul 7 00:00:44.496273 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 00:00:44.496875 systemd[1]: Reached target network.target - Network. Jul 7 00:00:44.497331 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 00:00:44.497782 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 00:00:44.498279 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 7 00:00:44.498732 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 7 00:00:44.499279 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 00:00:44.500069 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 7 00:00:44.500463 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 7 00:00:44.500867 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 7 00:00:44.500912 systemd[1]: Reached target paths.target - Path Units. Jul 7 00:00:44.501276 systemd[1]: Reached target timers.target - Timer Units. Jul 7 00:00:44.502849 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 00:00:44.504921 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 7 00:00:44.515165 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 7 00:00:44.516904 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 00:00:44.517493 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 00:00:44.517973 systemd[1]: Reached target basic.target - Basic System. Jul 7 00:00:44.518406 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 7 00:00:44.518447 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 00:00:44.520041 systemd[1]: Starting containerd.service - containerd container runtime... Jul 7 00:00:44.524952 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 7 00:00:44.529229 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 00:00:44.535548 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 00:00:44.539948 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
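
The DHCPv4 lease recorded above is internally consistent: 172.31.20.165/20 falls in 172.31.16.0/20, the same /20 that contains the offered gateway 172.31.16.1. A quick check with the standard library:

    import ipaddress

    iface = ipaddress.ip_interface("172.31.20.165/20")
    print(iface.network)  # 172.31.16.0/20
    print(ipaddress.ip_address("172.31.16.1") in iface.network)  # True
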
Jul 7 00:00:44.540568 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 00:00:44.543383 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 00:00:44.548986 systemd[1]: Started ntpd.service - Network Time Service. Jul 7 00:00:44.552832 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 7 00:00:44.561980 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 7 00:00:44.564914 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 7 00:00:44.574060 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 7 00:00:44.583998 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 00:00:44.585063 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 7 00:00:44.586306 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 7 00:00:44.588924 systemd[1]: Starting update-engine.service - Update Engine... Jul 7 00:00:44.613418 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 7 00:00:44.651505 jq[1950]: false Jul 7 00:00:44.662222 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 7 00:00:44.662510 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 7 00:00:44.680620 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 7 00:00:44.686954 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 7 00:00:44.704733 jq[1960]: true Jul 7 00:00:44.718769 update_engine[1959]: I20250707 00:00:44.714434 1959 main.cc:92] Flatcar Update Engine starting Jul 7 00:00:44.741560 extend-filesystems[1951]: Found loop4 Jul 7 00:00:44.744361 extend-filesystems[1951]: Found loop5 Jul 7 00:00:44.744361 extend-filesystems[1951]: Found loop6 Jul 7 00:00:44.744361 extend-filesystems[1951]: Found loop7 Jul 7 00:00:44.744361 extend-filesystems[1951]: Found nvme0n1 Jul 7 00:00:44.744361 extend-filesystems[1951]: Found nvme0n1p1 Jul 7 00:00:44.744361 extend-filesystems[1951]: Found nvme0n1p2 Jul 7 00:00:44.744361 extend-filesystems[1951]: Found nvme0n1p3 Jul 7 00:00:44.744361 extend-filesystems[1951]: Found usr Jul 7 00:00:44.744361 extend-filesystems[1951]: Found nvme0n1p4 Jul 7 00:00:44.744361 extend-filesystems[1951]: Found nvme0n1p6 Jul 7 00:00:44.744361 extend-filesystems[1951]: Found nvme0n1p7 Jul 7 00:00:44.744361 extend-filesystems[1951]: Found nvme0n1p9 Jul 7 00:00:44.744361 extend-filesystems[1951]: Checking size of /dev/nvme0n1p9 Jul 7 00:00:44.765211 jq[1976]: true Jul 7 00:00:44.758325 dbus-daemon[1949]: [system] SELinux support is enabled Jul 7 00:00:44.765627 tar[1967]: linux-amd64/LICENSE Jul 7 00:00:44.765627 tar[1967]: linux-amd64/helm Jul 7 00:00:44.752168 (ntainerd)[1969]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 00:00:44.763282 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jul 7 00:00:44.775325 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 00:00:44.775400 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 7 00:00:44.776006 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 00:00:44.776042 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 00:00:44.789967 dbus-daemon[1949]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1882 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 7 00:00:44.798478 update_engine[1959]: I20250707 00:00:44.794878 1959 update_check_scheduler.cc:74] Next update check in 7m53s Jul 7 00:00:44.797611 systemd-logind[1958]: Watching system buttons on /dev/input/event1 (Power Button) Jul 7 00:00:44.797634 systemd-logind[1958]: Watching system buttons on /dev/input/event3 (Sleep Button) Jul 7 00:00:44.797658 systemd-logind[1958]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 7 00:00:44.799231 systemd-logind[1958]: New seat seat0. Jul 7 00:00:44.801773 ntpd[1953]: ntpd 4.2.8p17@1.4004-o Sun Jul 6 21:48:38 UTC 2025 (1): Starting 
Jul 7 00:00:44.801810 ntpd[1953]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 7 00:00:44.806251 systemd[1]: Started update-engine.service - Update Engine. Jul 7 00:00:44.826489 extend-filesystems[1951]: Resized partition /dev/nvme0n1p9 Jul 7 00:00:44.853661 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 7 00:00:44.801822 ntpd[1953]: ---------------------------------------------------- Jul 7 00:00:44.851762 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 00:00:44.853848 coreos-metadata[1948]: Jul 07 00:00:44.852 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 7 00:00:44.854822 extend-filesystems[1999]: resize2fs 1.47.1 (20-May-2024) Jul 7 00:00:44.801832 ntpd[1953]: ntp-4 is maintained by Network Time Foundation, Jul 7 00:00:44.852012 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 7 00:00:44.801842 ntpd[1953]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 7 00:00:44.859439 systemd[1]: Finished setup-oem.service - Setup OEM. Jul 7 00:00:44.801853 ntpd[1953]: corporation. 
Support and training for ntp-4 are Jul 7 00:00:44.801865 ntpd[1953]: available at https://www.nwtime.org/support Jul 7 00:00:44.801875 ntpd[1953]: ---------------------------------------------------- Jul 7 00:00:44.805414 ntpd[1953]: proto: precision = 0.089 usec (-23) Jul 7 00:00:44.806426 dbus-daemon[1949]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 7 00:00:44.807255 ntpd[1953]: basedate set to 2025-06-24 Jul 7 00:00:44.807278 ntpd[1953]: gps base set to 2025-06-29 (week 2373) Jul 7 00:00:44.810219 ntpd[1953]: Listen and drop on 0 v6wildcard [::]:123 Jul 7 00:00:44.810274 ntpd[1953]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 7 00:00:44.810535 ntpd[1953]: Listen normally on 2 lo 127.0.0.1:123 Jul 7 00:00:44.810579 ntpd[1953]: Listen normally on 3 eth0 172.31.20.165:123 Jul 7 00:00:44.810625 ntpd[1953]: Listen normally on 4 lo [::1]:123 Jul 7 00:00:44.810672 ntpd[1953]: bind(21) AF_INET6 fe80::45b:ccff:feba:a1a7%2#123 flags 0x11 failed: Cannot assign requested address Jul 7 00:00:44.810714 ntpd[1953]: unable to create socket on eth0 (5) for fe80::45b:ccff:feba:a1a7%2#123 Jul 7 00:00:44.810730 ntpd[1953]: failed to init interface for address fe80::45b:ccff:feba:a1a7%2 Jul 7 00:00:44.810765 ntpd[1953]: Listening on routing socket on fd #21 for interface updates Jul 7 00:00:44.812370 ntpd[1953]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 7 00:00:44.812400 ntpd[1953]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 7 00:00:44.868996 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 00:00:44.873972 coreos-metadata[1948]: Jul 07 00:00:44.873 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jul 7 00:00:44.881768 coreos-metadata[1948]: Jul 07 00:00:44.878 INFO Fetch successful Jul 7 00:00:44.881768 coreos-metadata[1948]: Jul 07 00:00:44.878 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jul 7 00:00:44.881768 coreos-metadata[1948]: Jul 07 00:00:44.879 INFO Fetch successful Jul 7 00:00:44.881768 coreos-metadata[1948]: Jul 07 00:00:44.881 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jul 7 00:00:44.882872 coreos-metadata[1948]: Jul 07 00:00:44.882 INFO Fetch successful Jul 7 00:00:44.883786 coreos-metadata[1948]: Jul 07 00:00:44.883 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jul 7 00:00:44.885739 coreos-metadata[1948]: Jul 07 00:00:44.884 INFO Fetch successful Jul 7 00:00:44.885739 coreos-metadata[1948]: Jul 07 00:00:44.884 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jul 7 00:00:44.885739 coreos-metadata[1948]: Jul 07 00:00:44.885 INFO Fetch failed with 404: resource not found Jul 7 00:00:44.885739 coreos-metadata[1948]: Jul 07 00:00:44.885 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jul 7 00:00:44.885993 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Jul 7 00:00:44.891718 coreos-metadata[1948]: Jul 07 00:00:44.887 INFO Fetch successful Jul 7 00:00:44.891718 coreos-metadata[1948]: Jul 07 00:00:44.888 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jul 7 00:00:44.891718 coreos-metadata[1948]: Jul 07 00:00:44.891 INFO Fetch successful Jul 7 00:00:44.891718 coreos-metadata[1948]: Jul 07 00:00:44.891 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jul 7 00:00:44.895066 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 7 00:00:44.896490 coreos-metadata[1948]: Jul 07 00:00:44.896 INFO Fetch successful Jul 7 00:00:44.899155 coreos-metadata[1948]: Jul 07 00:00:44.896 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jul 7 00:00:44.900561 coreos-metadata[1948]: Jul 07 00:00:44.899 INFO Fetch successful Jul 7 00:00:44.900561 coreos-metadata[1948]: Jul 07 00:00:44.899 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jul 7 00:00:44.905878 coreos-metadata[1948]: Jul 07 00:00:44.905 INFO Fetch successful Jul 7 00:00:44.961728 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 7 00:00:44.979476 extend-filesystems[1999]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 7 00:00:44.979476 extend-filesystems[1999]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 7 00:00:44.979476 extend-filesystems[1999]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jul 7 00:00:44.983743 extend-filesystems[1951]: Resized filesystem in /dev/nvme0n1p9 Jul 7 00:00:44.985499 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 00:00:44.986902 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 7 00:00:45.015777 bash[2020]: Updated "/home/core/.ssh/authorized_keys" Jul 7 00:00:45.018724 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 00:00:45.033054 systemd[1]: Starting sshkeys.service... Jul 7 00:00:45.047339 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 7 00:00:45.048450 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 7 00:00:45.084519 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 7 00:00:45.095932 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 7 00:00:45.154028 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1767) Jul 7 00:00:45.410262 dbus-daemon[1949]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 7 00:00:45.410449 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 7 00:00:45.420019 dbus-daemon[1949]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2006 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 7 00:00:45.458125 systemd[1]: Starting polkit.service - Authorization Manager... 
Jul 7 00:00:45.517820 coreos-metadata[2034]: Jul 07 00:00:45.517 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 7 00:00:45.524801 containerd[1969]: time="2025-07-07T00:00:45.523983115Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 7 00:00:45.526033 coreos-metadata[2034]: Jul 07 00:00:45.525 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jul 7 00:00:45.528149 coreos-metadata[2034]: Jul 07 00:00:45.528 INFO Fetch successful Jul 7 00:00:45.528650 coreos-metadata[2034]: Jul 07 00:00:45.528 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 7 00:00:45.529210 coreos-metadata[2034]: Jul 07 00:00:45.529 INFO Fetch successful Jul 7 00:00:45.537296 unknown[2034]: wrote ssh authorized keys file for user: core Jul 7 00:00:45.540723 polkitd[2131]: Started polkitd version 121 Jul 7 00:00:45.554953 locksmithd[2008]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 00:00:45.567582 polkitd[2131]: Loading rules from directory /etc/polkit-1/rules.d Jul 7 00:00:45.567686 polkitd[2131]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 7 00:00:45.568395 polkitd[2131]: Finished loading, compiling and executing 2 rules Jul 7 00:00:45.570717 dbus-daemon[1949]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 7 00:00:45.570939 systemd[1]: Started polkit.service - Authorization Manager. Jul 7 00:00:45.573163 polkitd[2131]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 7 00:00:45.584772 update-ssh-keys[2142]: Updated "/home/core/.ssh/authorized_keys" Jul 7 00:00:45.587011 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 7 00:00:45.593555 systemd[1]: Finished sshkeys.service. Jul 7 00:00:45.612073 systemd-hostnamed[2006]: Hostname set to <ip-172-31-20-165> (transient) Jul 7 00:00:45.612363 systemd-resolved[1886]: System hostname changed to 'ip-172-31-20-165'. Jul 7 00:00:45.648725 containerd[1969]: time="2025-07-07T00:00:45.646484379Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 7 00:00:45.652552 containerd[1969]: time="2025-07-07T00:00:45.652498834Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 7 00:00:45.652552 containerd[1969]: time="2025-07-07T00:00:45.652549017Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 7 00:00:45.652728 containerd[1969]: time="2025-07-07T00:00:45.652572112Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 7 00:00:45.652796 containerd[1969]: time="2025-07-07T00:00:45.652774823Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 7 00:00:45.652836 containerd[1969]: time="2025-07-07T00:00:45.652805866Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 7 00:00:45.652906 containerd[1969]: time="2025-07-07T00:00:45.652886042Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 00:00:45.652957 containerd[1969]: time="2025-07-07T00:00:45.652909242Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 7 00:00:45.653171 containerd[1969]: time="2025-07-07T00:00:45.653144570Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 00:00:45.653217 containerd[1969]: time="2025-07-07T00:00:45.653173556Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 7 00:00:45.653217 containerd[1969]: time="2025-07-07T00:00:45.653194233Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 00:00:45.653217 containerd[1969]: time="2025-07-07T00:00:45.653210733Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 7 00:00:45.653327 containerd[1969]: time="2025-07-07T00:00:45.653308749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 7 00:00:45.653579 containerd[1969]: time="2025-07-07T00:00:45.653556195Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 7 00:00:45.653761 containerd[1969]: time="2025-07-07T00:00:45.653737301Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 00:00:45.653812 containerd[1969]: time="2025-07-07T00:00:45.653764025Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 7 00:00:45.653886 containerd[1969]: time="2025-07-07T00:00:45.653866541Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 7 00:00:45.653950 containerd[1969]: time="2025-07-07T00:00:45.653934619Z" level=info msg="metadata content store policy set" policy=shared Jul 7 00:00:45.661526 containerd[1969]: time="2025-07-07T00:00:45.661428347Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 7 00:00:45.661526 containerd[1969]: time="2025-07-07T00:00:45.661502532Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 7 00:00:45.661662 containerd[1969]: time="2025-07-07T00:00:45.661527702Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 7 00:00:45.661662 containerd[1969]: time="2025-07-07T00:00:45.661547199Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 7 00:00:45.661662 containerd[1969]: time="2025-07-07T00:00:45.661568885Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 7 00:00:45.661802 containerd[1969]: time="2025-07-07T00:00:45.661771928Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Jul 7 00:00:45.662721 containerd[1969]: time="2025-07-07T00:00:45.662119635Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 7 00:00:45.662721 containerd[1969]: time="2025-07-07T00:00:45.662245047Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 7 00:00:45.662721 containerd[1969]: time="2025-07-07T00:00:45.662265299Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 7 00:00:45.662721 containerd[1969]: time="2025-07-07T00:00:45.662285163Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 7 00:00:45.662721 containerd[1969]: time="2025-07-07T00:00:45.662303761Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 7 00:00:45.662721 containerd[1969]: time="2025-07-07T00:00:45.662324701Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 7 00:00:45.662721 containerd[1969]: time="2025-07-07T00:00:45.662342833Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 7 00:00:45.662721 containerd[1969]: time="2025-07-07T00:00:45.662363633Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 7 00:00:45.662721 containerd[1969]: time="2025-07-07T00:00:45.662384718Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 7 00:00:45.662721 containerd[1969]: time="2025-07-07T00:00:45.662403915Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 7 00:00:45.662721 containerd[1969]: time="2025-07-07T00:00:45.662422161Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 7 00:00:45.662721 containerd[1969]: time="2025-07-07T00:00:45.662439800Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 7 00:00:45.662721 containerd[1969]: time="2025-07-07T00:00:45.662468703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 7 00:00:45.662721 containerd[1969]: time="2025-07-07T00:00:45.662489206Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 7 00:00:45.663247 containerd[1969]: time="2025-07-07T00:00:45.662506931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 7 00:00:45.663247 containerd[1969]: time="2025-07-07T00:00:45.662540547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 7 00:00:45.663247 containerd[1969]: time="2025-07-07T00:00:45.662559161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 7 00:00:45.663247 containerd[1969]: time="2025-07-07T00:00:45.662578669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 7 00:00:45.663247 containerd[1969]: time="2025-07-07T00:00:45.662596263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jul 7 00:00:45.663247 containerd[1969]: time="2025-07-07T00:00:45.662616647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 7 00:00:45.663247 containerd[1969]: time="2025-07-07T00:00:45.662635504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 7 00:00:45.663247 containerd[1969]: time="2025-07-07T00:00:45.662656050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 7 00:00:45.663247 containerd[1969]: time="2025-07-07T00:00:45.662675171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 7 00:00:45.663247 containerd[1969]: time="2025-07-07T00:00:45.662714480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 7 00:00:45.663247 containerd[1969]: time="2025-07-07T00:00:45.662732304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 7 00:00:45.663247 containerd[1969]: time="2025-07-07T00:00:45.662753730Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 7 00:00:45.663247 containerd[1969]: time="2025-07-07T00:00:45.662787900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 7 00:00:45.663247 containerd[1969]: time="2025-07-07T00:00:45.662808252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 7 00:00:45.663247 containerd[1969]: time="2025-07-07T00:00:45.662825521Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 7 00:00:45.663786 containerd[1969]: time="2025-07-07T00:00:45.662881730Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 7 00:00:45.663786 containerd[1969]: time="2025-07-07T00:00:45.662909160Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 7 00:00:45.663786 containerd[1969]: time="2025-07-07T00:00:45.662924756Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 7 00:00:45.663786 containerd[1969]: time="2025-07-07T00:00:45.662944967Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 7 00:00:45.663786 containerd[1969]: time="2025-07-07T00:00:45.662960056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 7 00:00:45.663786 containerd[1969]: time="2025-07-07T00:00:45.662978478Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 7 00:00:45.663786 containerd[1969]: time="2025-07-07T00:00:45.662993669Z" level=info msg="NRI interface is disabled by configuration." Jul 7 00:00:45.663786 containerd[1969]: time="2025-07-07T00:00:45.663010201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 7 00:00:45.664091 containerd[1969]: time="2025-07-07T00:00:45.663429308Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 7 00:00:45.664091 containerd[1969]: time="2025-07-07T00:00:45.663517882Z" level=info msg="Connect containerd service" Jul 7 00:00:45.664091 containerd[1969]: time="2025-07-07T00:00:45.663570239Z" level=info msg="using legacy CRI server" Jul 7 00:00:45.664091 containerd[1969]: time="2025-07-07T00:00:45.663583343Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 00:00:45.667711 containerd[1969]: time="2025-07-07T00:00:45.665838979Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 7 00:00:45.667711 containerd[1969]: time="2025-07-07T00:00:45.667588639Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 00:00:45.667846 
containerd[1969]: time="2025-07-07T00:00:45.667732099Z" level=info msg="Start subscribing containerd event" Jul 7 00:00:45.667846 containerd[1969]: time="2025-07-07T00:00:45.667785828Z" level=info msg="Start recovering state" Jul 7 00:00:45.668521 containerd[1969]: time="2025-07-07T00:00:45.668496396Z" level=info msg="Start event monitor" Jul 7 00:00:45.668584 containerd[1969]: time="2025-07-07T00:00:45.668529880Z" level=info msg="Start snapshots syncer" Jul 7 00:00:45.668584 containerd[1969]: time="2025-07-07T00:00:45.668544751Z" level=info msg="Start cni network conf syncer for default" Jul 7 00:00:45.668584 containerd[1969]: time="2025-07-07T00:00:45.668556953Z" level=info msg="Start streaming server" Jul 7 00:00:45.669736 containerd[1969]: time="2025-07-07T00:00:45.668889316Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 00:00:45.669736 containerd[1969]: time="2025-07-07T00:00:45.668947093Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 00:00:45.669116 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 00:00:45.681106 containerd[1969]: time="2025-07-07T00:00:45.681055733Z" level=info msg="containerd successfully booted in 0.165404s" Jul 7 00:00:45.706212 sshd_keygen[1995]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 00:00:45.743229 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 00:00:45.754242 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 00:00:45.767230 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 00:00:45.767511 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 00:00:45.774452 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 00:00:45.797880 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 00:00:45.802399 ntpd[1953]: bind(24) AF_INET6 fe80::45b:ccff:feba:a1a7%2#123 flags 0x11 failed: Cannot assign requested address Jul 7 00:00:45.808750 ntpd[1953]: 7 Jul 00:00:45 ntpd[1953]: bind(24) AF_INET6 fe80::45b:ccff:feba:a1a7%2#123 flags 0x11 failed: Cannot assign requested address Jul 7 00:00:45.808750 ntpd[1953]: 7 Jul 00:00:45 ntpd[1953]: unable to create socket on eth0 (6) for fe80::45b:ccff:feba:a1a7%2#123 Jul 7 00:00:45.808750 ntpd[1953]: 7 Jul 00:00:45 ntpd[1953]: failed to init interface for address fe80::45b:ccff:feba:a1a7%2 Jul 7 00:00:45.807905 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 00:00:45.802442 ntpd[1953]: unable to create socket on eth0 (6) for fe80::45b:ccff:feba:a1a7%2#123 Jul 7 00:00:45.802460 ntpd[1953]: failed to init interface for address fe80::45b:ccff:feba:a1a7%2 Jul 7 00:00:45.819578 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 7 00:00:45.821035 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 00:00:45.886873 systemd-networkd[1882]: eth0: Gained IPv6LL Jul 7 00:00:45.891559 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 00:00:45.893109 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 00:00:45.903011 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jul 7 00:00:45.907026 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:00:45.912160 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 00:00:45.970380 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jul 7 00:00:46.004537 amazon-ssm-agent[2169]: Initializing new seelog logger Jul 7 00:00:46.004537 amazon-ssm-agent[2169]: New Seelog Logger Creation Complete Jul 7 00:00:46.004537 amazon-ssm-agent[2169]: 2025/07/07 00:00:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 00:00:46.004537 amazon-ssm-agent[2169]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 00:00:46.004537 amazon-ssm-agent[2169]: 2025/07/07 00:00:46 processing appconfig overrides Jul 7 00:00:46.006734 amazon-ssm-agent[2169]: 2025/07/07 00:00:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 00:00:46.006734 amazon-ssm-agent[2169]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 00:00:46.006734 amazon-ssm-agent[2169]: 2025/07/07 00:00:46 processing appconfig overrides Jul 7 00:00:46.006734 amazon-ssm-agent[2169]: 2025/07/07 00:00:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 00:00:46.006734 amazon-ssm-agent[2169]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 00:00:46.006734 amazon-ssm-agent[2169]: 2025/07/07 00:00:46 processing appconfig overrides Jul 7 00:00:46.006734 amazon-ssm-agent[2169]: 2025-07-07 00:00:46 INFO Proxy environment variables: Jul 7 00:00:46.009716 amazon-ssm-agent[2169]: 2025/07/07 00:00:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 00:00:46.009716 amazon-ssm-agent[2169]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 00:00:46.009716 amazon-ssm-agent[2169]: 2025/07/07 00:00:46 processing appconfig overrides Jul 7 00:00:46.106846 amazon-ssm-agent[2169]: 2025-07-07 00:00:46 INFO https_proxy: Jul 7 00:00:46.177027 tar[1967]: linux-amd64/README.md Jul 7 00:00:46.198187 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 00:00:46.204945 amazon-ssm-agent[2169]: 2025-07-07 00:00:46 INFO http_proxy: Jul 7 00:00:46.262363 amazon-ssm-agent[2169]: 2025-07-07 00:00:46 INFO no_proxy: Jul 7 00:00:46.262363 amazon-ssm-agent[2169]: 2025-07-07 00:00:46 INFO Checking if agent identity type OnPrem can be assumed Jul 7 00:00:46.262363 amazon-ssm-agent[2169]: 2025-07-07 00:00:46 INFO Checking if agent identity type EC2 can be assumed Jul 7 00:00:46.262363 amazon-ssm-agent[2169]: 2025-07-07 00:00:46 INFO Agent will take identity from EC2 Jul 7 00:00:46.262363 amazon-ssm-agent[2169]: 2025-07-07 00:00:46 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 7 00:00:46.262363 amazon-ssm-agent[2169]: 2025-07-07 00:00:46 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 7 00:00:46.262363 amazon-ssm-agent[2169]: 2025-07-07 00:00:46 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 7 00:00:46.262363 amazon-ssm-agent[2169]: 2025-07-07 00:00:46 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jul 7 00:00:46.262363 amazon-ssm-agent[2169]: 2025-07-07 00:00:46 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jul 7 00:00:46.262363 amazon-ssm-agent[2169]: 2025-07-07 00:00:46 INFO [amazon-ssm-agent] Starting Core Agent Jul 7 00:00:46.262363 amazon-ssm-agent[2169]: 2025-07-07 00:00:46 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Jul 7 00:00:46.262363 amazon-ssm-agent[2169]: 2025-07-07 00:00:46 INFO [Registrar] Starting registrar module Jul 7 00:00:46.262363 amazon-ssm-agent[2169]: 2025-07-07 00:00:46 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jul 7 00:00:46.262747 amazon-ssm-agent[2169]: 2025-07-07 00:00:46 INFO [EC2Identity] EC2 registration was successful. Jul 7 00:00:46.262747 amazon-ssm-agent[2169]: 2025-07-07 00:00:46 INFO [CredentialRefresher] credentialRefresher has started Jul 7 00:00:46.262747 amazon-ssm-agent[2169]: 2025-07-07 00:00:46 INFO [CredentialRefresher] Starting credentials refresher loop Jul 7 00:00:46.262747 amazon-ssm-agent[2169]: 2025-07-07 00:00:46 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jul 7 00:00:46.303027 amazon-ssm-agent[2169]: 2025-07-07 00:00:46 INFO [CredentialRefresher] Next credential rotation will be in 30.89166101515 minutes Jul 7 00:00:47.275050 amazon-ssm-agent[2169]: 2025-07-07 00:00:47 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jul 7 00:00:47.375389 amazon-ssm-agent[2169]: 2025-07-07 00:00:47 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2192) started Jul 7 00:00:47.458180 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 00:00:47.468669 systemd[1]: Started sshd@0-172.31.20.165:22-147.75.109.163:57998.service - OpenSSH per-connection server daemon (147.75.109.163:57998). Jul 7 00:00:47.476360 amazon-ssm-agent[2169]: 2025-07-07 00:00:47 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jul 7 00:00:47.653450 sshd[2204]: Accepted publickey for core from 147.75.109.163 port 57998 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:00:47.656720 sshd[2204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:00:47.665950 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 00:00:47.671996 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 00:00:47.675741 systemd-logind[1958]: New session 1 of user core. Jul 7 00:00:47.688889 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 00:00:47.717250 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 00:00:47.723644 (systemd)[2208]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 00:00:47.864445 systemd[2208]: Queued start job for default target default.target. Jul 7 00:00:47.872256 systemd[2208]: Created slice app.slice - User Application Slice. Jul 7 00:00:47.872305 systemd[2208]: Reached target paths.target - Paths. Jul 7 00:00:47.872328 systemd[2208]: Reached target timers.target - Timers. Jul 7 00:00:47.875654 systemd[2208]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 00:00:47.889729 systemd[2208]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 00:00:47.889893 systemd[2208]: Reached target sockets.target - Sockets. Jul 7 00:00:47.889915 systemd[2208]: Reached target basic.target - Basic System. Jul 7 00:00:47.889971 systemd[2208]: Reached target default.target - Main User Target. Jul 7 00:00:47.890013 systemd[2208]: Startup finished in 159ms. Jul 7 00:00:47.891196 systemd[1]: Started user@500.service - User Manager for UID 500. 
Jul 7 00:00:47.904957 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 00:00:48.050973 systemd[1]: Started sshd@1-172.31.20.165:22-147.75.109.163:58006.service - OpenSSH per-connection server daemon (147.75.109.163:58006). Jul 7 00:00:48.211036 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:00:48.213318 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 00:00:48.214126 systemd[1]: Startup finished in 629ms (kernel) + 7.044s (initrd) + 7.511s (userspace) = 15.186s. Jul 7 00:00:48.219790 (kubelet)[2226]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:00:48.229549 sshd[2219]: Accepted publickey for core from 147.75.109.163 port 58006 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:00:48.232806 sshd[2219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:00:48.238816 systemd-logind[1958]: New session 2 of user core. Jul 7 00:00:48.244933 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 00:00:48.372679 sshd[2219]: pam_unix(sshd:session): session closed for user core Jul 7 00:00:48.376214 systemd[1]: sshd@1-172.31.20.165:22-147.75.109.163:58006.service: Deactivated successfully. Jul 7 00:00:48.377964 systemd[1]: session-2.scope: Deactivated successfully. Jul 7 00:00:48.378749 systemd-logind[1958]: Session 2 logged out. Waiting for processes to exit. Jul 7 00:00:48.380677 systemd-logind[1958]: Removed session 2. Jul 7 00:00:48.410816 systemd[1]: Started sshd@2-172.31.20.165:22-147.75.109.163:58012.service - OpenSSH per-connection server daemon (147.75.109.163:58012). Jul 7 00:00:48.572824 sshd[2236]: Accepted publickey for core from 147.75.109.163 port 58012 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:00:48.575194 sshd[2236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:00:48.581339 systemd-logind[1958]: New session 3 of user core. Jul 7 00:00:48.588029 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 00:00:48.705570 sshd[2236]: pam_unix(sshd:session): session closed for user core Jul 7 00:00:48.710319 systemd[1]: sshd@2-172.31.20.165:22-147.75.109.163:58012.service: Deactivated successfully. Jul 7 00:00:48.712640 systemd[1]: session-3.scope: Deactivated successfully. Jul 7 00:00:48.714932 systemd-logind[1958]: Session 3 logged out. Waiting for processes to exit. Jul 7 00:00:48.716228 systemd-logind[1958]: Removed session 3. Jul 7 00:00:48.735985 systemd[1]: Started sshd@3-172.31.20.165:22-147.75.109.163:58022.service - OpenSSH per-connection server daemon (147.75.109.163:58022). Jul 7 00:00:48.802252 ntpd[1953]: Listen normally on 7 eth0 [fe80::45b:ccff:feba:a1a7%2]:123 Jul 7 00:00:48.802638 ntpd[1953]: 7 Jul 00:00:48 ntpd[1953]: Listen normally on 7 eth0 [fe80::45b:ccff:feba:a1a7%2]:123 Jul 7 00:00:48.898301 sshd[2247]: Accepted publickey for core from 147.75.109.163 port 58022 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:00:48.899916 sshd[2247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:00:48.907222 systemd-logind[1958]: New session 4 of user core. Jul 7 00:00:48.909968 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jul 7 00:00:49.036290 sshd[2247]: pam_unix(sshd:session): session closed for user core Jul 7 00:00:49.038797 systemd[1]: sshd@3-172.31.20.165:22-147.75.109.163:58022.service: Deactivated successfully. Jul 7 00:00:49.040440 systemd[1]: session-4.scope: Deactivated successfully. Jul 7 00:00:49.042222 systemd-logind[1958]: Session 4 logged out. Waiting for processes to exit. Jul 7 00:00:49.043296 systemd-logind[1958]: Removed session 4. Jul 7 00:00:49.075110 systemd[1]: Started sshd@4-172.31.20.165:22-147.75.109.163:58028.service - OpenSSH per-connection server daemon (147.75.109.163:58028). Jul 7 00:00:49.227624 sshd[2254]: Accepted publickey for core from 147.75.109.163 port 58028 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:00:49.229302 sshd[2254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:00:49.235016 systemd-logind[1958]: New session 5 of user core. Jul 7 00:00:49.240996 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 7 00:00:49.369766 kubelet[2226]: E0707 00:00:49.369713 2226 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:00:49.370277 sudo[2258]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 00:00:49.370616 sudo[2258]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:00:49.372775 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:00:49.372920 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:00:49.373510 systemd[1]: kubelet.service: Consumed 1.098s CPU time. Jul 7 00:00:49.385528 sudo[2258]: pam_unix(sudo:session): session closed for user root Jul 7 00:00:49.408274 sshd[2254]: pam_unix(sshd:session): session closed for user core Jul 7 00:00:49.413036 systemd[1]: sshd@4-172.31.20.165:22-147.75.109.163:58028.service: Deactivated successfully. Jul 7 00:00:49.414957 systemd[1]: session-5.scope: Deactivated successfully. Jul 7 00:00:49.416026 systemd-logind[1958]: Session 5 logged out. Waiting for processes to exit. Jul 7 00:00:49.417165 systemd-logind[1958]: Removed session 5. Jul 7 00:00:49.445120 systemd[1]: Started sshd@5-172.31.20.165:22-147.75.109.163:58036.service - OpenSSH per-connection server daemon (147.75.109.163:58036). Jul 7 00:00:49.601024 sshd[2264]: Accepted publickey for core from 147.75.109.163 port 58036 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:00:49.602944 sshd[2264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:00:49.608272 systemd-logind[1958]: New session 6 of user core. Jul 7 00:00:49.618958 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 7 00:00:49.715902 sudo[2268]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 00:00:49.716186 sudo[2268]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:00:49.720195 sudo[2268]: pam_unix(sudo:session): session closed for user root Jul 7 00:00:49.725798 sudo[2267]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 7 00:00:49.726096 sudo[2267]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:00:49.745171 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 7 00:00:49.747325 auditctl[2271]: No rules Jul 7 00:00:49.747893 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 00:00:49.748122 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 7 00:00:49.751075 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 7 00:00:49.781992 augenrules[2289]: No rules Jul 7 00:00:49.783616 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 7 00:00:49.784805 sudo[2267]: pam_unix(sudo:session): session closed for user root Jul 7 00:00:49.808080 sshd[2264]: pam_unix(sshd:session): session closed for user core Jul 7 00:00:49.810878 systemd[1]: sshd@5-172.31.20.165:22-147.75.109.163:58036.service: Deactivated successfully. Jul 7 00:00:49.812674 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 00:00:49.814228 systemd-logind[1958]: Session 6 logged out. Waiting for processes to exit. Jul 7 00:00:49.815193 systemd-logind[1958]: Removed session 6. Jul 7 00:00:49.843937 systemd[1]: Started sshd@6-172.31.20.165:22-147.75.109.163:58038.service - OpenSSH per-connection server daemon (147.75.109.163:58038). Jul 7 00:00:50.011050 sshd[2297]: Accepted publickey for core from 147.75.109.163 port 58038 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:00:50.012657 sshd[2297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:00:50.017603 systemd-logind[1958]: New session 7 of user core. Jul 7 00:00:50.026943 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 00:00:50.124122 sudo[2300]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 00:00:50.124419 sudo[2300]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:00:50.669125 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 7 00:00:50.671255 (dockerd)[2315]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 00:00:51.302926 dockerd[2315]: time="2025-07-07T00:00:51.302711806Z" level=info msg="Starting up" Jul 7 00:00:51.668075 dockerd[2315]: time="2025-07-07T00:00:51.667783509Z" level=info msg="Loading containers: start." Jul 7 00:00:51.801727 kernel: Initializing XFRM netlink socket Jul 7 00:00:52.784904 systemd-resolved[1886]: Clock change detected. Flushing caches. Jul 7 00:00:52.815768 (udev-worker)[2339]: Network interface NamePolicy= disabled on kernel command line. Jul 7 00:00:52.879403 systemd-networkd[1882]: docker0: Link UP Jul 7 00:00:52.893883 dockerd[2315]: time="2025-07-07T00:00:52.893838938Z" level=info msg="Loading containers: done." 
Jul 7 00:00:52.926909 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck559866748-merged.mount: Deactivated successfully. Jul 7 00:00:52.932141 dockerd[2315]: time="2025-07-07T00:00:52.932082715Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 00:00:52.932321 dockerd[2315]: time="2025-07-07T00:00:52.932209131Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 7 00:00:52.932357 dockerd[2315]: time="2025-07-07T00:00:52.932336299Z" level=info msg="Daemon has completed initialization" Jul 7 00:00:52.965419 dockerd[2315]: time="2025-07-07T00:00:52.964914350Z" level=info msg="API listen on /run/docker.sock" Jul 7 00:00:52.965146 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 7 00:00:54.219061 containerd[1969]: time="2025-07-07T00:00:54.219014703Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 7 00:00:54.786572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2095677428.mount: Deactivated successfully. Jul 7 00:00:56.013530 containerd[1969]: time="2025-07-07T00:00:56.013478641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:56.015014 containerd[1969]: time="2025-07-07T00:00:56.014961538Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799045" Jul 7 00:00:56.015816 containerd[1969]: time="2025-07-07T00:00:56.015751789Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:56.019178 containerd[1969]: time="2025-07-07T00:00:56.019140538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:56.021029 containerd[1969]: time="2025-07-07T00:00:56.020483370Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 1.801415846s" Jul 7 00:00:56.021029 containerd[1969]: time="2025-07-07T00:00:56.020536315Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jul 7 00:00:56.021772 containerd[1969]: time="2025-07-07T00:00:56.021579542Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 7 00:00:57.491645 containerd[1969]: time="2025-07-07T00:00:57.491567094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:57.492941 containerd[1969]: time="2025-07-07T00:00:57.492867268Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783912" Jul 7 00:00:57.494513 containerd[1969]: time="2025-07-07T00:00:57.494014216Z" 
level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:57.497456 containerd[1969]: time="2025-07-07T00:00:57.497414141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:57.498615 containerd[1969]: time="2025-07-07T00:00:57.498574122Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.476961438s" Jul 7 00:00:57.498615 containerd[1969]: time="2025-07-07T00:00:57.498617182Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jul 7 00:00:57.499149 containerd[1969]: time="2025-07-07T00:00:57.499115916Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 7 00:00:58.720644 containerd[1969]: time="2025-07-07T00:00:58.720571816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:58.728395 containerd[1969]: time="2025-07-07T00:00:58.728140044Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176916" Jul 7 00:00:58.738588 containerd[1969]: time="2025-07-07T00:00:58.738458380Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:58.745260 containerd[1969]: time="2025-07-07T00:00:58.745167220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:58.746981 containerd[1969]: time="2025-07-07T00:00:58.746917100Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.247769615s" Jul 7 00:00:58.746981 containerd[1969]: time="2025-07-07T00:00:58.746960650Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jul 7 00:00:58.747718 containerd[1969]: time="2025-07-07T00:00:58.747639147Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 7 00:00:59.754474 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount497666779.mount: Deactivated successfully. Jul 7 00:01:00.378009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Jul 7 00:01:00.382802 containerd[1969]: time="2025-07-07T00:01:00.382748632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:00.385900 containerd[1969]: time="2025-07-07T00:01:00.385042651Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895363" Jul 7 00:01:00.385560 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:01:00.388263 containerd[1969]: time="2025-07-07T00:01:00.387462328Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:00.391702 containerd[1969]: time="2025-07-07T00:01:00.391654859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:00.393272 containerd[1969]: time="2025-07-07T00:01:00.392312885Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 1.644633634s" Jul 7 00:01:00.393272 containerd[1969]: time="2025-07-07T00:01:00.392359778Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jul 7 00:01:00.394273 containerd[1969]: time="2025-07-07T00:01:00.393881531Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 7 00:01:00.617048 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:01:00.628816 (kubelet)[2532]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:01:00.678622 kubelet[2532]: E0707 00:01:00.678576 2532 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:01:00.682833 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:01:00.683035 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:01:00.955423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4011362681.mount: Deactivated successfully. 
Jul 7 00:01:03.717866 containerd[1969]: time="2025-07-07T00:01:03.717802412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:03.719678 containerd[1969]: time="2025-07-07T00:01:03.719631082Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 7 00:01:03.722269 containerd[1969]: time="2025-07-07T00:01:03.720367926Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:03.730458 containerd[1969]: time="2025-07-07T00:01:03.730406069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:03.731692 containerd[1969]: time="2025-07-07T00:01:03.731645802Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 3.337710845s" Jul 7 00:01:03.731889 containerd[1969]: time="2025-07-07T00:01:03.731865923Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 7 00:01:03.733017 containerd[1969]: time="2025-07-07T00:01:03.732981003Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 00:01:04.229004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2193393657.mount: Deactivated successfully. 
Jul 7 00:01:04.247494 containerd[1969]: time="2025-07-07T00:01:04.247433822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:04.251483 containerd[1969]: time="2025-07-07T00:01:04.249875886Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 7 00:01:04.253111 containerd[1969]: time="2025-07-07T00:01:04.253013286Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:04.273485 containerd[1969]: time="2025-07-07T00:01:04.273397699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:04.277994 containerd[1969]: time="2025-07-07T00:01:04.275601839Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 542.583665ms" Jul 7 00:01:04.277994 containerd[1969]: time="2025-07-07T00:01:04.275639510Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 7 00:01:04.277994 containerd[1969]: time="2025-07-07T00:01:04.276681509Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 7 00:01:04.900984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1703022222.mount: Deactivated successfully. Jul 7 00:01:07.606522 containerd[1969]: time="2025-07-07T00:01:07.606447410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:07.609522 containerd[1969]: time="2025-07-07T00:01:07.609447029Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Jul 7 00:01:07.617270 containerd[1969]: time="2025-07-07T00:01:07.615091095Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:07.624506 containerd[1969]: time="2025-07-07T00:01:07.624462226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:07.625660 containerd[1969]: time="2025-07-07T00:01:07.625614841Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.348905534s" Jul 7 00:01:07.625660 containerd[1969]: time="2025-07-07T00:01:07.625660082Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 7 00:01:10.878093 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
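
Each "Pulled image" entry above reports both a byte count and a wall-clock duration, so effective registry throughput can be read straight out of the log. A small sketch, with the numbers copied verbatim from the entries above:

```go
// Compute effective pull throughput from the sizes and durations logged above.
package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	pulls := []struct {
		image string
		bytes float64
		dur   string
	}{
		{"kube-proxy:v1.32.6", 30894382, "1.644633634s"},
		{"coredns:v1.11.3", 18562039, "3.337710845s"},
		{"pause:3.10", 320368, "542.583665ms"},
		{"etcd:3.5.16-0", 57680541, "3.348905534s"},
	}
	for _, p := range pulls {
		d, err := time.ParseDuration(p.dur)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%-20s %6.2f MB/s\n", p.image, p.bytes/d.Seconds()/1e6)
	}
}
```

That works out to roughly 18.8, 5.6, 0.6, and 17.2 MB/s respectively. The tiny pause image is latency-dominated rather than bandwidth-limited; why coredns pulled three times slower per byte than kube-proxy and etcd cannot be determined from this log alone.
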
Jul 7 00:01:10.892674 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:01:11.202498 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:01:11.205454 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 00:01:11.205719 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:01:11.217974 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:01:11.246320 systemd[1]: Reloading requested from client PID 2687 ('systemctl') (unit session-7.scope)... Jul 7 00:01:11.246344 systemd[1]: Reloading... Jul 7 00:01:11.370376 zram_generator::config[2728]: No configuration found. Jul 7 00:01:11.548329 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:01:11.637548 systemd[1]: Reloading finished in 390 ms. Jul 7 00:01:11.697807 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 00:01:11.698148 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 00:01:11.698518 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:01:11.703853 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:01:12.143312 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:01:12.154815 (kubelet)[2791]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 00:01:12.237160 kubelet[2791]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:01:12.239274 kubelet[2791]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 00:01:12.239274 kubelet[2791]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 7 00:01:12.239274 kubelet[2791]: I0707 00:01:12.237708 2791 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 00:01:12.456129 kubelet[2791]: I0707 00:01:12.455994 2791 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 7 00:01:12.456129 kubelet[2791]: I0707 00:01:12.456031 2791 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 00:01:12.456642 kubelet[2791]: I0707 00:01:12.456604 2791 server.go:954] "Client rotation is on, will bootstrap in background" Jul 7 00:01:12.518874 kubelet[2791]: E0707 00:01:12.518383 2791 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.20.165:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.20.165:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:01:12.518874 kubelet[2791]: I0707 00:01:12.518668 2791 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:01:12.541599 kubelet[2791]: E0707 00:01:12.541563 2791 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 00:01:12.541805 kubelet[2791]: I0707 00:01:12.541767 2791 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 00:01:12.546997 kubelet[2791]: I0707 00:01:12.546950 2791 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 00:01:12.549705 kubelet[2791]: I0707 00:01:12.549645 2791 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 00:01:12.550200 kubelet[2791]: I0707 00:01:12.549703 2791 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-165","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 00:01:12.552720 kubelet[2791]: I0707 00:01:12.552663 2791 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 00:01:12.552720 kubelet[2791]: I0707 00:01:12.552710 2791 container_manager_linux.go:304] "Creating device plugin manager" Jul 7 00:01:12.554894 kubelet[2791]: I0707 00:01:12.554841 2791 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:01:12.563207 kubelet[2791]: I0707 00:01:12.563160 2791 kubelet.go:446] "Attempting to sync node with API server" Jul 7 00:01:12.563207 kubelet[2791]: I0707 00:01:12.563215 2791 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 00:01:12.563420 kubelet[2791]: I0707 00:01:12.563261 2791 kubelet.go:352] "Adding apiserver pod source" Jul 7 00:01:12.563420 kubelet[2791]: I0707 00:01:12.563276 2791 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 00:01:12.574582 kubelet[2791]: W0707 00:01:12.574278 2791 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.20.165:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-165&limit=500&resourceVersion=0": dial tcp 172.31.20.165:6443: connect: connection refused Jul 7 00:01:12.574582 kubelet[2791]: E0707 00:01:12.574356 2791 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.20.165:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-165&limit=500&resourceVersion=0\": dial tcp 172.31.20.165:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:01:12.575046 kubelet[2791]: W0707 00:01:12.574996 
2791 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.20.165:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.20.165:6443: connect: connection refused Jul 7 00:01:12.575046 kubelet[2791]: E0707 00:01:12.575045 2791 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.20.165:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.165:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:01:12.575148 kubelet[2791]: I0707 00:01:12.575117 2791 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 00:01:12.580323 kubelet[2791]: I0707 00:01:12.579206 2791 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 00:01:12.580323 kubelet[2791]: W0707 00:01:12.579302 2791 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 00:01:12.583248 kubelet[2791]: I0707 00:01:12.581522 2791 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 00:01:12.583248 kubelet[2791]: I0707 00:01:12.581556 2791 server.go:1287] "Started kubelet" Jul 7 00:01:12.587273 kubelet[2791]: I0707 00:01:12.585709 2791 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 00:01:12.591259 kubelet[2791]: I0707 00:01:12.590588 2791 server.go:479] "Adding debug handlers to kubelet server" Jul 7 00:01:12.591547 kubelet[2791]: I0707 00:01:12.591363 2791 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 00:01:12.594380 kubelet[2791]: I0707 00:01:12.594352 2791 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 00:01:12.595000 kubelet[2791]: I0707 00:01:12.594958 2791 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 00:01:12.598418 kubelet[2791]: E0707 00:01:12.596285 2791 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.20.165:6443/api/v1/namespaces/default/events\": dial tcp 172.31.20.165:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-20-165.184fcf15cb3e6265 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-165,UID:ip-172-31-20-165,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-165,},FirstTimestamp:2025-07-07 00:01:12.581538405 +0000 UTC m=+0.420838421,LastTimestamp:2025-07-07 00:01:12.581538405 +0000 UTC m=+0.420838421,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-165,}" Jul 7 00:01:12.600665 kubelet[2791]: I0707 00:01:12.600642 2791 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 00:01:12.604135 kubelet[2791]: E0707 00:01:12.604104 2791 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-165\" not found" Jul 7 00:01:12.604135 kubelet[2791]: I0707 00:01:12.604145 2791 volume_manager.go:297] 
"Starting Kubelet Volume Manager" Jul 7 00:01:12.608052 kubelet[2791]: I0707 00:01:12.604401 2791 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 00:01:12.608052 kubelet[2791]: I0707 00:01:12.604448 2791 reconciler.go:26] "Reconciler: start to sync state" Jul 7 00:01:12.608052 kubelet[2791]: W0707 00:01:12.604952 2791 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.20.165:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.165:6443: connect: connection refused Jul 7 00:01:12.608052 kubelet[2791]: E0707 00:01:12.605003 2791 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.20.165:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.20.165:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:01:12.608052 kubelet[2791]: E0707 00:01:12.605200 2791 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.165:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-165?timeout=10s\": dial tcp 172.31.20.165:6443: connect: connection refused" interval="200ms" Jul 7 00:01:12.609284 kubelet[2791]: I0707 00:01:12.609265 2791 factory.go:221] Registration of the systemd container factory successfully Jul 7 00:01:12.610173 kubelet[2791]: I0707 00:01:12.609415 2791 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 00:01:12.611735 kubelet[2791]: I0707 00:01:12.611720 2791 factory.go:221] Registration of the containerd container factory successfully Jul 7 00:01:12.624064 kubelet[2791]: I0707 00:01:12.624013 2791 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 00:01:12.627271 kubelet[2791]: I0707 00:01:12.627212 2791 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 00:01:12.627778 kubelet[2791]: I0707 00:01:12.627410 2791 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 7 00:01:12.627778 kubelet[2791]: I0707 00:01:12.627444 2791 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 7 00:01:12.627778 kubelet[2791]: I0707 00:01:12.627454 2791 kubelet.go:2382] "Starting kubelet main sync loop" Jul 7 00:01:12.627778 kubelet[2791]: E0707 00:01:12.627516 2791 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 00:01:12.638611 kubelet[2791]: W0707 00:01:12.638526 2791 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.20.165:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.165:6443: connect: connection refused Jul 7 00:01:12.638941 kubelet[2791]: E0707 00:01:12.638831 2791 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.20.165:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.20.165:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:01:12.651852 kubelet[2791]: I0707 00:01:12.651812 2791 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 00:01:12.651852 kubelet[2791]: I0707 00:01:12.651831 2791 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 00:01:12.651852 kubelet[2791]: I0707 00:01:12.651850 2791 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:01:12.655682 kubelet[2791]: I0707 00:01:12.655640 2791 policy_none.go:49] "None policy: Start" Jul 7 00:01:12.655682 kubelet[2791]: I0707 00:01:12.655685 2791 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 00:01:12.655682 kubelet[2791]: I0707 00:01:12.655704 2791 state_mem.go:35] "Initializing new in-memory state store" Jul 7 00:01:12.664784 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 7 00:01:12.679032 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 7 00:01:12.683129 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 7 00:01:12.696048 kubelet[2791]: I0707 00:01:12.695895 2791 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 00:01:12.698092 kubelet[2791]: I0707 00:01:12.697426 2791 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 00:01:12.698092 kubelet[2791]: I0707 00:01:12.697457 2791 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 00:01:12.700986 kubelet[2791]: E0707 00:01:12.700956 2791 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 7 00:01:12.701695 kubelet[2791]: E0707 00:01:12.701193 2791 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-20-165\" not found" Jul 7 00:01:12.702599 kubelet[2791]: I0707 00:01:12.701813 2791 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 00:01:12.753117 systemd[1]: Created slice kubepods-burstable-pod6fae4e481126e4e75555ba14653787a0.slice - libcontainer container kubepods-burstable-pod6fae4e481126e4e75555ba14653787a0.slice. 
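
The three slices systemd creates above mirror the kubelet's QoS tiers: Guaranteed pods are placed directly under kubepods.slice, while Burstable and BestEffort pods land in the nested kubepods-burstable.slice and kubepods-besteffort.slice (the per-pod slice created next is for a Burstable static pod). With the systemd cgroup driver and cgroup v2, both confirmed in the nodeConfig dump above, the hierarchy is visible under /sys/fs/cgroup. A small sketch:

```go
// Hedged sketch: list the QoS slice hierarchy the kubelet just asked systemd
// to create. Assumes a cgroup v2 unified mount at /sys/fs/cgroup, consistent
// with "CgroupVersion":2 in the nodeConfig dump above.
package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	for _, slice := range []string{
		"/sys/fs/cgroup/kubepods.slice",
		"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice",
		"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice",
	} {
		entries, err := os.ReadDir(slice)
		if err != nil {
			log.Printf("%s: %v", slice, err)
			continue
		}
		fmt.Printf("%s: %d entries\n", slice, len(entries))
	}
}
```
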
Jul 7 00:01:12.767181 kubelet[2791]: E0707 00:01:12.766885 2791 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-165\" not found" node="ip-172-31-20-165" Jul 7 00:01:12.770570 systemd[1]: Created slice kubepods-burstable-pode43f2bcacbb7daaa711f6dc5645daae0.slice - libcontainer container kubepods-burstable-pode43f2bcacbb7daaa711f6dc5645daae0.slice. Jul 7 00:01:12.779200 kubelet[2791]: E0707 00:01:12.778942 2791 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-165\" not found" node="ip-172-31-20-165" Jul 7 00:01:12.783315 systemd[1]: Created slice kubepods-burstable-podac5bd82af4c20c9a6cc723fd11b9625f.slice - libcontainer container kubepods-burstable-podac5bd82af4c20c9a6cc723fd11b9625f.slice. Jul 7 00:01:12.785574 kubelet[2791]: E0707 00:01:12.785537 2791 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-165\" not found" node="ip-172-31-20-165" Jul 7 00:01:12.801850 kubelet[2791]: I0707 00:01:12.801816 2791 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-165" Jul 7 00:01:12.802483 kubelet[2791]: E0707 00:01:12.802451 2791 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.165:6443/api/v1/nodes\": dial tcp 172.31.20.165:6443: connect: connection refused" node="ip-172-31-20-165" Jul 7 00:01:12.805649 kubelet[2791]: I0707 00:01:12.805614 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e43f2bcacbb7daaa711f6dc5645daae0-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-165\" (UID: \"e43f2bcacbb7daaa711f6dc5645daae0\") " pod="kube-system/kube-controller-manager-ip-172-31-20-165" Jul 7 00:01:12.806150 kubelet[2791]: I0707 00:01:12.805848 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6fae4e481126e4e75555ba14653787a0-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-165\" (UID: \"6fae4e481126e4e75555ba14653787a0\") " pod="kube-system/kube-apiserver-ip-172-31-20-165" Jul 7 00:01:12.806150 kubelet[2791]: E0707 00:01:12.805856 2791 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.165:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-165?timeout=10s\": dial tcp 172.31.20.165:6443: connect: connection refused" interval="400ms" Jul 7 00:01:12.806150 kubelet[2791]: I0707 00:01:12.806091 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e43f2bcacbb7daaa711f6dc5645daae0-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-165\" (UID: \"e43f2bcacbb7daaa711f6dc5645daae0\") " pod="kube-system/kube-controller-manager-ip-172-31-20-165" Jul 7 00:01:12.806150 kubelet[2791]: I0707 00:01:12.806115 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e43f2bcacbb7daaa711f6dc5645daae0-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-165\" (UID: \"e43f2bcacbb7daaa711f6dc5645daae0\") " pod="kube-system/kube-controller-manager-ip-172-31-20-165" Jul 7 00:01:12.806402 kubelet[2791]: I0707 
00:01:12.806374 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e43f2bcacbb7daaa711f6dc5645daae0-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-165\" (UID: \"e43f2bcacbb7daaa711f6dc5645daae0\") " pod="kube-system/kube-controller-manager-ip-172-31-20-165" Jul 7 00:01:12.806472 kubelet[2791]: I0707 00:01:12.806419 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e43f2bcacbb7daaa711f6dc5645daae0-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-165\" (UID: \"e43f2bcacbb7daaa711f6dc5645daae0\") " pod="kube-system/kube-controller-manager-ip-172-31-20-165" Jul 7 00:01:12.806524 kubelet[2791]: I0707 00:01:12.806468 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ac5bd82af4c20c9a6cc723fd11b9625f-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-165\" (UID: \"ac5bd82af4c20c9a6cc723fd11b9625f\") " pod="kube-system/kube-scheduler-ip-172-31-20-165" Jul 7 00:01:12.806524 kubelet[2791]: I0707 00:01:12.806492 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6fae4e481126e4e75555ba14653787a0-ca-certs\") pod \"kube-apiserver-ip-172-31-20-165\" (UID: \"6fae4e481126e4e75555ba14653787a0\") " pod="kube-system/kube-apiserver-ip-172-31-20-165" Jul 7 00:01:12.806615 kubelet[2791]: I0707 00:01:12.806533 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6fae4e481126e4e75555ba14653787a0-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-165\" (UID: \"6fae4e481126e4e75555ba14653787a0\") " pod="kube-system/kube-apiserver-ip-172-31-20-165" Jul 7 00:01:13.004836 kubelet[2791]: I0707 00:01:13.004711 2791 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-165" Jul 7 00:01:13.005439 kubelet[2791]: E0707 00:01:13.005209 2791 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.165:6443/api/v1/nodes\": dial tcp 172.31.20.165:6443: connect: connection refused" node="ip-172-31-20-165" Jul 7 00:01:13.068554 containerd[1969]: time="2025-07-07T00:01:13.068492705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-165,Uid:6fae4e481126e4e75555ba14653787a0,Namespace:kube-system,Attempt:0,}" Jul 7 00:01:13.080322 containerd[1969]: time="2025-07-07T00:01:13.080280873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-165,Uid:e43f2bcacbb7daaa711f6dc5645daae0,Namespace:kube-system,Attempt:0,}" Jul 7 00:01:13.087565 containerd[1969]: time="2025-07-07T00:01:13.087503277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-165,Uid:ac5bd82af4c20c9a6cc723fd11b9625f,Namespace:kube-system,Attempt:0,}" Jul 7 00:01:13.207143 kubelet[2791]: E0707 00:01:13.207041 2791 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.165:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-165?timeout=10s\": dial tcp 172.31.20.165:6443: connect: connection refused" interval="800ms" Jul 7 00:01:13.407915 kubelet[2791]: I0707 00:01:13.407881 2791 
kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-165" Jul 7 00:01:13.408402 kubelet[2791]: E0707 00:01:13.408324 2791 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.165:6443/api/v1/nodes\": dial tcp 172.31.20.165:6443: connect: connection refused" node="ip-172-31-20-165" Jul 7 00:01:13.418615 kubelet[2791]: W0707 00:01:13.418502 2791 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.20.165:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.165:6443: connect: connection refused Jul 7 00:01:13.418615 kubelet[2791]: E0707 00:01:13.418582 2791 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.20.165:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.20.165:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:01:13.576685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3880270934.mount: Deactivated successfully. Jul 7 00:01:13.595080 containerd[1969]: time="2025-07-07T00:01:13.595016140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:01:13.597092 containerd[1969]: time="2025-07-07T00:01:13.597021055Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 7 00:01:13.599502 containerd[1969]: time="2025-07-07T00:01:13.599451331Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:01:13.601507 containerd[1969]: time="2025-07-07T00:01:13.601459501Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:01:13.604971 containerd[1969]: time="2025-07-07T00:01:13.604852099Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 00:01:13.611273 containerd[1969]: time="2025-07-07T00:01:13.610141775Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:01:13.611273 containerd[1969]: time="2025-07-07T00:01:13.610402314Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 00:01:13.620919 containerd[1969]: time="2025-07-07T00:01:13.618687909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:01:13.620919 containerd[1969]: time="2025-07-07T00:01:13.618976674Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 538.620347ms" 
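
A pattern worth noticing across these retries: the "Failed to ensure lease exists, will retry" errors double their interval on each consecutive failure — 200ms, 400ms, and 800ms above, then 1.6s and 3.2s further down. A minimal sketch of that doubling (the cap and the stubbed operation are assumptions for illustration, not values taken from this log):

```go
// Minimal exponential-backoff sketch matching the doubling intervals in the
// lease-controller errors above (200ms -> 400ms -> 800ms -> 1.6s -> 3.2s).
// The cap and the fake operation are illustrative assumptions.
package main

import (
	"errors"
	"fmt"
	"time"
)

func main() {
	interval := 200 * time.Millisecond
	maxInterval := 7 * time.Second // assumed cap, for the sketch only

	ensureLease := func() error { return errors.New("connect: connection refused") }

	for i := 0; i < 5; i++ {
		if err := ensureLease(); err != nil {
			fmt.Printf("failed, will retry in %v: %v\n", interval, err)
			time.Sleep(interval)
			interval *= 2
			if interval > maxInterval {
				interval = maxInterval
			}
		}
	}
}
```
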
Jul 7 00:01:13.626117 containerd[1969]: time="2025-07-07T00:01:13.626058524Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 538.300105ms" Jul 7 00:01:13.635891 containerd[1969]: time="2025-07-07T00:01:13.635841868Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 567.27298ms" Jul 7 00:01:13.724301 kubelet[2791]: W0707 00:01:13.723700 2791 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.20.165:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.20.165:6443: connect: connection refused Jul 7 00:01:13.724301 kubelet[2791]: E0707 00:01:13.723783 2791 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.20.165:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.165:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:01:13.833262 containerd[1969]: time="2025-07-07T00:01:13.832915955Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:01:13.833262 containerd[1969]: time="2025-07-07T00:01:13.832998301Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:01:13.833262 containerd[1969]: time="2025-07-07T00:01:13.833015946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:01:13.833262 containerd[1969]: time="2025-07-07T00:01:13.833135615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:01:13.841873 containerd[1969]: time="2025-07-07T00:01:13.841189376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:01:13.842566 containerd[1969]: time="2025-07-07T00:01:13.842515837Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:01:13.844809 containerd[1969]: time="2025-07-07T00:01:13.844651626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:01:13.850550 containerd[1969]: time="2025-07-07T00:01:13.849984912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:01:13.850550 containerd[1969]: time="2025-07-07T00:01:13.850072053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:01:13.850550 containerd[1969]: time="2025-07-07T00:01:13.850097327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:01:13.850550 containerd[1969]: time="2025-07-07T00:01:13.850232156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:01:13.850550 containerd[1969]: time="2025-07-07T00:01:13.850479943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:01:13.874065 systemd[1]: Started cri-containerd-2fe7af402b57cec687e86b2d22007eb0328bc54d157a62aa58365676503725cb.scope - libcontainer container 2fe7af402b57cec687e86b2d22007eb0328bc54d157a62aa58365676503725cb. Jul 7 00:01:13.902468 systemd[1]: Started cri-containerd-01f5fea1a8c062811c19c4fea4c1910dff665a0b877c1ba80001b9a45e5e574b.scope - libcontainer container 01f5fea1a8c062811c19c4fea4c1910dff665a0b877c1ba80001b9a45e5e574b. Jul 7 00:01:13.909733 systemd[1]: Started cri-containerd-6b6e0ebe7521b8ea99ac8b101ce3739b0753664e0413c3ecbbcac0175b544d3e.scope - libcontainer container 6b6e0ebe7521b8ea99ac8b101ce3739b0753664e0413c3ecbbcac0175b544d3e. Jul 7 00:01:13.920783 kubelet[2791]: W0707 00:01:13.920175 2791 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.20.165:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-165&limit=500&resourceVersion=0": dial tcp 172.31.20.165:6443: connect: connection refused Jul 7 00:01:13.920783 kubelet[2791]: E0707 00:01:13.920538 2791 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.20.165:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-165&limit=500&resourceVersion=0\": dial tcp 172.31.20.165:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:01:14.022756 kubelet[2791]: E0707 00:01:14.022586 2791 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.165:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-165?timeout=10s\": dial tcp 172.31.20.165:6443: connect: connection refused" interval="1.6s" Jul 7 00:01:14.024423 containerd[1969]: time="2025-07-07T00:01:14.024371836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-165,Uid:6fae4e481126e4e75555ba14653787a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fe7af402b57cec687e86b2d22007eb0328bc54d157a62aa58365676503725cb\"" Jul 7 00:01:14.025573 containerd[1969]: time="2025-07-07T00:01:14.025367563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-165,Uid:ac5bd82af4c20c9a6cc723fd11b9625f,Namespace:kube-system,Attempt:0,} returns sandbox id \"01f5fea1a8c062811c19c4fea4c1910dff665a0b877c1ba80001b9a45e5e574b\"" Jul 7 00:01:14.031974 containerd[1969]: time="2025-07-07T00:01:14.031865882Z" level=info msg="CreateContainer within sandbox \"2fe7af402b57cec687e86b2d22007eb0328bc54d157a62aa58365676503725cb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 00:01:14.033438 containerd[1969]: time="2025-07-07T00:01:14.033360632Z" level=info msg="CreateContainer within sandbox \"01f5fea1a8c062811c19c4fea4c1910dff665a0b877c1ba80001b9a45e5e574b\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 00:01:14.034228 containerd[1969]: time="2025-07-07T00:01:14.034140090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-165,Uid:e43f2bcacbb7daaa711f6dc5645daae0,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b6e0ebe7521b8ea99ac8b101ce3739b0753664e0413c3ecbbcac0175b544d3e\"" Jul 7 00:01:14.036754 containerd[1969]: time="2025-07-07T00:01:14.036716907Z" level=info msg="CreateContainer within sandbox \"6b6e0ebe7521b8ea99ac8b101ce3739b0753664e0413c3ecbbcac0175b544d3e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 00:01:14.095013 containerd[1969]: time="2025-07-07T00:01:14.094607247Z" level=info msg="CreateContainer within sandbox \"01f5fea1a8c062811c19c4fea4c1910dff665a0b877c1ba80001b9a45e5e574b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"db282ef5bc093132b558fa35afb751390bc2dceb27cddb3c93f93a4c090f4d69\"" Jul 7 00:01:14.098642 containerd[1969]: time="2025-07-07T00:01:14.098416598Z" level=info msg="CreateContainer within sandbox \"6b6e0ebe7521b8ea99ac8b101ce3739b0753664e0413c3ecbbcac0175b544d3e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"df88e2a6f3b784ac3540936b6d041696f4b8a0238e71ea5c2972f597aa6c8e1a\"" Jul 7 00:01:14.098642 containerd[1969]: time="2025-07-07T00:01:14.098523572Z" level=info msg="StartContainer for \"db282ef5bc093132b558fa35afb751390bc2dceb27cddb3c93f93a4c090f4d69\"" Jul 7 00:01:14.098642 containerd[1969]: time="2025-07-07T00:01:14.098551788Z" level=info msg="CreateContainer within sandbox \"2fe7af402b57cec687e86b2d22007eb0328bc54d157a62aa58365676503725cb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ad87d1d83f5a47e2cfb89d4d14d4d128ceb09606efa62680c2e2aaff664bba47\"" Jul 7 00:01:14.101032 containerd[1969]: time="2025-07-07T00:01:14.099629800Z" level=info msg="StartContainer for \"df88e2a6f3b784ac3540936b6d041696f4b8a0238e71ea5c2972f597aa6c8e1a\"" Jul 7 00:01:14.111031 containerd[1969]: time="2025-07-07T00:01:14.110898895Z" level=info msg="StartContainer for \"ad87d1d83f5a47e2cfb89d4d14d4d128ceb09606efa62680c2e2aaff664bba47\"" Jul 7 00:01:14.115300 kubelet[2791]: W0707 00:01:14.115185 2791 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.20.165:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.165:6443: connect: connection refused Jul 7 00:01:14.115300 kubelet[2791]: E0707 00:01:14.115248 2791 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.20.165:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.20.165:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:01:14.151519 systemd[1]: Started cri-containerd-db282ef5bc093132b558fa35afb751390bc2dceb27cddb3c93f93a4c090f4d69.scope - libcontainer container db282ef5bc093132b558fa35afb751390bc2dceb27cddb3c93f93a4c090f4d69. Jul 7 00:01:14.171475 systemd[1]: Started cri-containerd-ad87d1d83f5a47e2cfb89d4d14d4d128ceb09606efa62680c2e2aaff664bba47.scope - libcontainer container ad87d1d83f5a47e2cfb89d4d14d4d128ceb09606efa62680c2e2aaff664bba47. 
Jul 7 00:01:14.172979 systemd[1]: Started cri-containerd-df88e2a6f3b784ac3540936b6d041696f4b8a0238e71ea5c2972f597aa6c8e1a.scope - libcontainer container df88e2a6f3b784ac3540936b6d041696f4b8a0238e71ea5c2972f597aa6c8e1a. Jul 7 00:01:14.214876 kubelet[2791]: I0707 00:01:14.213698 2791 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-165" Jul 7 00:01:14.214876 kubelet[2791]: E0707 00:01:14.214625 2791 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.165:6443/api/v1/nodes\": dial tcp 172.31.20.165:6443: connect: connection refused" node="ip-172-31-20-165" Jul 7 00:01:14.269404 containerd[1969]: time="2025-07-07T00:01:14.269338371Z" level=info msg="StartContainer for \"df88e2a6f3b784ac3540936b6d041696f4b8a0238e71ea5c2972f597aa6c8e1a\" returns successfully" Jul 7 00:01:14.278660 containerd[1969]: time="2025-07-07T00:01:14.277446089Z" level=info msg="StartContainer for \"db282ef5bc093132b558fa35afb751390bc2dceb27cddb3c93f93a4c090f4d69\" returns successfully" Jul 7 00:01:14.285295 containerd[1969]: time="2025-07-07T00:01:14.285229871Z" level=info msg="StartContainer for \"ad87d1d83f5a47e2cfb89d4d14d4d128ceb09606efa62680c2e2aaff664bba47\" returns successfully" Jul 7 00:01:14.601396 kubelet[2791]: E0707 00:01:14.601269 2791 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.20.165:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.20.165:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:01:14.654295 kubelet[2791]: E0707 00:01:14.654262 2791 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-165\" not found" node="ip-172-31-20-165" Jul 7 00:01:14.656429 kubelet[2791]: E0707 00:01:14.656399 2791 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-165\" not found" node="ip-172-31-20-165" Jul 7 00:01:14.659675 kubelet[2791]: E0707 00:01:14.659644 2791 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-165\" not found" node="ip-172-31-20-165" Jul 7 00:01:15.605312 kubelet[2791]: W0707 00:01:15.605143 2791 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.20.165:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.20.165:6443: connect: connection refused Jul 7 00:01:15.605312 kubelet[2791]: E0707 00:01:15.605189 2791 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.20.165:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.165:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:01:15.623848 kubelet[2791]: E0707 00:01:15.623789 2791 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.165:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-165?timeout=10s\": dial tcp 172.31.20.165:6443: connect: connection refused" interval="3.2s" Jul 7 00:01:15.660887 kubelet[2791]: E0707 00:01:15.660737 2791 kubelet.go:3190] "No need to create a mirror 
pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-165\" not found" node="ip-172-31-20-165" Jul 7 00:01:15.661706 kubelet[2791]: E0707 00:01:15.661663 2791 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-165\" not found" node="ip-172-31-20-165" Jul 7 00:01:15.777044 kubelet[2791]: W0707 00:01:15.776960 2791 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.20.165:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.165:6443: connect: connection refused Jul 7 00:01:15.777044 kubelet[2791]: E0707 00:01:15.777006 2791 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.20.165:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.20.165:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:01:15.817330 kubelet[2791]: I0707 00:01:15.817292 2791 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-165" Jul 7 00:01:15.817851 kubelet[2791]: E0707 00:01:15.817701 2791 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.165:6443/api/v1/nodes\": dial tcp 172.31.20.165:6443: connect: connection refused" node="ip-172-31-20-165" Jul 7 00:01:16.627651 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 7 00:01:16.767336 kubelet[2791]: E0707 00:01:16.765910 2791 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-165\" not found" node="ip-172-31-20-165" Jul 7 00:01:17.578838 kubelet[2791]: I0707 00:01:17.578793 2791 apiserver.go:52] "Watching apiserver" Jul 7 00:01:17.604576 kubelet[2791]: I0707 00:01:17.604543 2791 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 00:01:17.886754 kubelet[2791]: E0707 00:01:17.886696 2791 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-20-165" not found Jul 7 00:01:18.235332 kubelet[2791]: E0707 00:01:18.235163 2791 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-20-165" not found Jul 7 00:01:18.683954 kubelet[2791]: E0707 00:01:18.683914 2791 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-20-165" not found Jul 7 00:01:18.828392 kubelet[2791]: E0707 00:01:18.828351 2791 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-20-165\" not found" node="ip-172-31-20-165" Jul 7 00:01:19.024563 kubelet[2791]: I0707 00:01:19.024222 2791 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-165" Jul 7 00:01:19.037513 kubelet[2791]: I0707 00:01:19.037189 2791 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-20-165" Jul 7 00:01:19.105509 kubelet[2791]: I0707 00:01:19.105450 2791 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-165" Jul 7 00:01:19.133341 kubelet[2791]: I0707 00:01:19.131378 2791 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-ip-172-31-20-165" Jul 7 00:01:19.148460 kubelet[2791]: I0707 00:01:19.148427 2791 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-165" Jul 7 00:01:19.636076 systemd[1]: Reloading requested from client PID 3064 ('systemctl') (unit session-7.scope)... Jul 7 00:01:19.636097 systemd[1]: Reloading... Jul 7 00:01:19.748270 zram_generator::config[3104]: No configuration found. Jul 7 00:01:19.926906 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:01:20.107718 systemd[1]: Reloading finished in 471 ms. Jul 7 00:01:20.162863 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:01:20.180463 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 00:01:20.180679 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:01:20.188626 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:01:20.461080 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:01:20.473814 (kubelet)[3164]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 00:01:20.546688 kubelet[3164]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:01:20.546688 kubelet[3164]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 00:01:20.546688 kubelet[3164]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:01:20.546688 kubelet[3164]: I0707 00:01:20.545689 3164 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 00:01:20.554531 kubelet[3164]: I0707 00:01:20.554499 3164 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 7 00:01:20.555263 kubelet[3164]: I0707 00:01:20.554676 3164 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 00:01:20.555263 kubelet[3164]: I0707 00:01:20.554964 3164 server.go:954] "Client rotation is on, will bootstrap in background" Jul 7 00:01:20.557643 kubelet[3164]: I0707 00:01:20.557612 3164 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 7 00:01:20.565451 kubelet[3164]: I0707 00:01:20.565329 3164 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:01:20.571663 kubelet[3164]: E0707 00:01:20.571628 3164 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 00:01:20.571795 kubelet[3164]: I0707 00:01:20.571785 3164 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Jul 7 00:01:20.575102 kubelet[3164]: I0707 00:01:20.575074 3164 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 7 00:01:20.575561 kubelet[3164]: I0707 00:01:20.575530 3164 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 00:01:20.575965 kubelet[3164]: I0707 00:01:20.575642 3164 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-165","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 00:01:20.576108 kubelet[3164]: I0707 00:01:20.576096 3164 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 00:01:20.576173 kubelet[3164]: I0707 00:01:20.576166 3164 container_manager_linux.go:304] "Creating device plugin manager" Jul 7 00:01:20.576275 kubelet[3164]: I0707 00:01:20.576267 3164 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:01:20.576553 kubelet[3164]: I0707 00:01:20.576475 3164 kubelet.go:446] "Attempting to sync node with API server" Jul 7 00:01:20.579278 kubelet[3164]: I0707 00:01:20.577614 3164 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 00:01:20.579278 kubelet[3164]: I0707 00:01:20.577652 3164 kubelet.go:352] "Adding apiserver pod source" Jul 7 00:01:20.579278 kubelet[3164]: I0707 00:01:20.577667 3164 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 00:01:20.591511 kubelet[3164]: I0707 00:01:20.591467 3164 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 00:01:20.592330 kubelet[3164]: I0707 00:01:20.592309 3164 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 00:01:20.592938 kubelet[3164]: I0707 00:01:20.592918 3164 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 00:01:20.593070 kubelet[3164]: I0707 00:01:20.593060 3164 server.go:1287] "Started kubelet" Jul 7 00:01:20.595807 kubelet[3164]: I0707 00:01:20.595785 3164 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 00:01:20.605005 kubelet[3164]: I0707 00:01:20.604952 3164 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 00:01:20.607850 kubelet[3164]: I0707 00:01:20.607824 3164 server.go:479] "Adding debug handlers to kubelet server" Jul 7 00:01:20.613501 kubelet[3164]: I0707 00:01:20.612493 3164 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 00:01:20.613501 kubelet[3164]: I0707 00:01:20.612764 3164 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 00:01:20.613501 kubelet[3164]: I0707 00:01:20.613409 3164 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 00:01:20.614438 kubelet[3164]: I0707 00:01:20.614401 3164 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 00:01:20.614791 kubelet[3164]: E0707 00:01:20.614772 3164 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 00:01:20.615567 kubelet[3164]: I0707 00:01:20.615534 3164 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 00:01:20.619074 kubelet[3164]: I0707 00:01:20.619052 3164 factory.go:221] Registration of the systemd container factory successfully Jul 7 00:01:20.619353 kubelet[3164]: I0707 00:01:20.619334 3164 reconciler.go:26] "Reconciler: start to sync state" Jul 7 00:01:20.621349 kubelet[3164]: I0707 00:01:20.621303 3164 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 00:01:20.626355 kubelet[3164]: I0707 00:01:20.626316 3164 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 00:01:20.628381 kubelet[3164]: I0707 00:01:20.628351 3164 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 00:01:20.628558 kubelet[3164]: I0707 00:01:20.628546 3164 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 7 00:01:20.628666 kubelet[3164]: I0707 00:01:20.628652 3164 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
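
Both kubelet instances log "Adding static pod path" path="/etc/kubernetes/manifests": that directory, not the API server, is where the kube-apiserver, kube-controller-manager, and kube-scheduler pods in this log originate. The API-side objects are mirror pods created after the fact, which is what the "Creating a mirror pod for static pod" and "Failed creating a mirror pod ... already exists" entries below are about. A sketch of what the kubelet watches, with the standard kubeadm file names noted as an assumption:

```go
// Hedged sketch of the static-pod source: list manifest files under the path
// logged above. The real kubelet parses each file into a v1.Pod and then
// creates a mirror pod for it in the API server.
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const dir = "/etc/kubernetes/manifests"
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		if strings.HasSuffix(e.Name(), ".yaml") || strings.HasSuffix(e.Name(), ".yml") {
			// On a kubeadm control plane, typically kube-apiserver.yaml,
			// kube-controller-manager.yaml, kube-scheduler.yaml (and etcd.yaml).
			fmt.Println(filepath.Join(dir, e.Name()))
		}
	}
}
```
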
Jul 7 00:01:20.628738 kubelet[3164]: I0707 00:01:20.628731 3164 kubelet.go:2382] "Starting kubelet main sync loop" Jul 7 00:01:20.628873 kubelet[3164]: E0707 00:01:20.628853 3164 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 00:01:20.640504 kubelet[3164]: I0707 00:01:20.640477 3164 factory.go:221] Registration of the containerd container factory successfully Jul 7 00:01:20.704304 kubelet[3164]: I0707 00:01:20.703560 3164 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 00:01:20.704304 kubelet[3164]: I0707 00:01:20.703575 3164 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 00:01:20.704304 kubelet[3164]: I0707 00:01:20.703593 3164 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:01:20.704304 kubelet[3164]: I0707 00:01:20.703765 3164 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 00:01:20.704304 kubelet[3164]: I0707 00:01:20.703781 3164 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 00:01:20.704304 kubelet[3164]: I0707 00:01:20.703801 3164 policy_none.go:49] "None policy: Start" Jul 7 00:01:20.704304 kubelet[3164]: I0707 00:01:20.703810 3164 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 00:01:20.704304 kubelet[3164]: I0707 00:01:20.703819 3164 state_mem.go:35] "Initializing new in-memory state store" Jul 7 00:01:20.704304 kubelet[3164]: I0707 00:01:20.703929 3164 state_mem.go:75] "Updated machine memory state" Jul 7 00:01:20.709279 kubelet[3164]: I0707 00:01:20.709181 3164 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 00:01:20.712385 kubelet[3164]: I0707 00:01:20.711253 3164 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 00:01:20.712385 kubelet[3164]: I0707 00:01:20.711277 3164 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 00:01:20.712385 kubelet[3164]: I0707 00:01:20.711749 3164 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 00:01:20.718858 kubelet[3164]: E0707 00:01:20.717575 3164 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 7 00:01:20.732989 kubelet[3164]: I0707 00:01:20.732956 3164 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-165" Jul 7 00:01:20.736425 kubelet[3164]: I0707 00:01:20.736375 3164 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-165" Jul 7 00:01:20.737440 kubelet[3164]: I0707 00:01:20.736232 3164 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-20-165" Jul 7 00:01:20.752940 kubelet[3164]: E0707 00:01:20.751647 3164 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-20-165\" already exists" pod="kube-system/kube-scheduler-ip-172-31-20-165" Jul 7 00:01:20.756251 kubelet[3164]: E0707 00:01:20.756054 3164 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-20-165\" already exists" pod="kube-system/kube-apiserver-ip-172-31-20-165" Jul 7 00:01:20.759278 kubelet[3164]: E0707 00:01:20.758638 3164 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-20-165\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-20-165" Jul 7 00:01:20.820179 kubelet[3164]: I0707 00:01:20.819943 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e43f2bcacbb7daaa711f6dc5645daae0-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-165\" (UID: \"e43f2bcacbb7daaa711f6dc5645daae0\") " pod="kube-system/kube-controller-manager-ip-172-31-20-165" Jul 7 00:01:20.820179 kubelet[3164]: I0707 00:01:20.819985 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e43f2bcacbb7daaa711f6dc5645daae0-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-165\" (UID: \"e43f2bcacbb7daaa711f6dc5645daae0\") " pod="kube-system/kube-controller-manager-ip-172-31-20-165" Jul 7 00:01:20.820179 kubelet[3164]: I0707 00:01:20.820010 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e43f2bcacbb7daaa711f6dc5645daae0-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-165\" (UID: \"e43f2bcacbb7daaa711f6dc5645daae0\") " pod="kube-system/kube-controller-manager-ip-172-31-20-165" Jul 7 00:01:20.820179 kubelet[3164]: I0707 00:01:20.820028 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e43f2bcacbb7daaa711f6dc5645daae0-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-165\" (UID: \"e43f2bcacbb7daaa711f6dc5645daae0\") " pod="kube-system/kube-controller-manager-ip-172-31-20-165" Jul 7 00:01:20.820179 kubelet[3164]: I0707 00:01:20.820047 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ac5bd82af4c20c9a6cc723fd11b9625f-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-165\" (UID: \"ac5bd82af4c20c9a6cc723fd11b9625f\") " pod="kube-system/kube-scheduler-ip-172-31-20-165" Jul 7 00:01:20.820466 kubelet[3164]: I0707 00:01:20.820061 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/6fae4e481126e4e75555ba14653787a0-ca-certs\") pod \"kube-apiserver-ip-172-31-20-165\" (UID: \"6fae4e481126e4e75555ba14653787a0\") " pod="kube-system/kube-apiserver-ip-172-31-20-165" Jul 7 00:01:20.820466 kubelet[3164]: I0707 00:01:20.820077 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e43f2bcacbb7daaa711f6dc5645daae0-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-165\" (UID: \"e43f2bcacbb7daaa711f6dc5645daae0\") " pod="kube-system/kube-controller-manager-ip-172-31-20-165" Jul 7 00:01:20.820466 kubelet[3164]: I0707 00:01:20.820093 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6fae4e481126e4e75555ba14653787a0-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-165\" (UID: \"6fae4e481126e4e75555ba14653787a0\") " pod="kube-system/kube-apiserver-ip-172-31-20-165" Jul 7 00:01:20.820466 kubelet[3164]: I0707 00:01:20.820111 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6fae4e481126e4e75555ba14653787a0-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-165\" (UID: \"6fae4e481126e4e75555ba14653787a0\") " pod="kube-system/kube-apiserver-ip-172-31-20-165" Jul 7 00:01:20.838252 kubelet[3164]: I0707 00:01:20.838191 3164 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-165" Jul 7 00:01:20.849627 kubelet[3164]: I0707 00:01:20.848973 3164 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-20-165" Jul 7 00:01:20.849627 kubelet[3164]: I0707 00:01:20.849090 3164 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-20-165" Jul 7 00:01:21.580420 kubelet[3164]: I0707 00:01:21.580279 3164 apiserver.go:52] "Watching apiserver" Jul 7 00:01:21.616552 kubelet[3164]: I0707 00:01:21.616513 3164 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 00:01:21.679012 kubelet[3164]: I0707 00:01:21.678935 3164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-20-165" podStartSLOduration=2.678915318 podStartE2EDuration="2.678915318s" podCreationTimestamp="2025-07-07 00:01:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:01:21.66112901 +0000 UTC m=+1.180925596" watchObservedRunningTime="2025-07-07 00:01:21.678915318 +0000 UTC m=+1.198711905" Jul 7 00:01:21.704188 kubelet[3164]: I0707 00:01:21.703777 3164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-20-165" podStartSLOduration=2.703754946 podStartE2EDuration="2.703754946s" podCreationTimestamp="2025-07-07 00:01:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:01:21.679836223 +0000 UTC m=+1.199632810" watchObservedRunningTime="2025-07-07 00:01:21.703754946 +0000 UTC m=+1.223551533" Jul 7 00:01:21.722619 kubelet[3164]: I0707 00:01:21.722353 3164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-20-165" podStartSLOduration=2.7223321499999997 
podStartE2EDuration="2.72233215s" podCreationTimestamp="2025-07-07 00:01:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:01:21.704042717 +0000 UTC m=+1.223839303" watchObservedRunningTime="2025-07-07 00:01:21.72233215 +0000 UTC m=+1.242128734" Jul 7 00:01:26.761795 kubelet[3164]: I0707 00:01:26.761738 3164 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 00:01:26.763254 kubelet[3164]: I0707 00:01:26.762872 3164 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 00:01:26.763305 containerd[1969]: time="2025-07-07T00:01:26.762674881Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 7 00:01:27.798129 systemd[1]: Created slice kubepods-besteffort-podc5f719e1_4f00_42f5_b461_fbbec569f1d2.slice - libcontainer container kubepods-besteffort-podc5f719e1_4f00_42f5_b461_fbbec569f1d2.slice. Jul 7 00:01:27.873109 kubelet[3164]: I0707 00:01:27.873071 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c5f719e1-4f00-42f5-b461-fbbec569f1d2-kube-proxy\") pod \"kube-proxy-8rpkf\" (UID: \"c5f719e1-4f00-42f5-b461-fbbec569f1d2\") " pod="kube-system/kube-proxy-8rpkf" Jul 7 00:01:27.873109 kubelet[3164]: I0707 00:01:27.873122 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5f719e1-4f00-42f5-b461-fbbec569f1d2-xtables-lock\") pod \"kube-proxy-8rpkf\" (UID: \"c5f719e1-4f00-42f5-b461-fbbec569f1d2\") " pod="kube-system/kube-proxy-8rpkf" Jul 7 00:01:27.873109 kubelet[3164]: I0707 00:01:27.873152 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5f719e1-4f00-42f5-b461-fbbec569f1d2-lib-modules\") pod \"kube-proxy-8rpkf\" (UID: \"c5f719e1-4f00-42f5-b461-fbbec569f1d2\") " pod="kube-system/kube-proxy-8rpkf" Jul 7 00:01:27.873109 kubelet[3164]: I0707 00:01:27.873194 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7blwx\" (UniqueName: \"kubernetes.io/projected/c5f719e1-4f00-42f5-b461-fbbec569f1d2-kube-api-access-7blwx\") pod \"kube-proxy-8rpkf\" (UID: \"c5f719e1-4f00-42f5-b461-fbbec569f1d2\") " pod="kube-system/kube-proxy-8rpkf" Jul 7 00:01:27.922071 systemd[1]: Created slice kubepods-besteffort-podc03dc967_7a32_476e_8c39_68e044a7f679.slice - libcontainer container kubepods-besteffort-podc03dc967_7a32_476e_8c39_68e044a7f679.slice. 
Jul 7 00:01:27.973995 kubelet[3164]: I0707 00:01:27.973394 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl98j\" (UniqueName: \"kubernetes.io/projected/c03dc967-7a32-476e-8c39-68e044a7f679-kube-api-access-xl98j\") pod \"tigera-operator-747864d56d-lmh6g\" (UID: \"c03dc967-7a32-476e-8c39-68e044a7f679\") " pod="tigera-operator/tigera-operator-747864d56d-lmh6g" Jul 7 00:01:27.973995 kubelet[3164]: I0707 00:01:27.973459 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c03dc967-7a32-476e-8c39-68e044a7f679-var-lib-calico\") pod \"tigera-operator-747864d56d-lmh6g\" (UID: \"c03dc967-7a32-476e-8c39-68e044a7f679\") " pod="tigera-operator/tigera-operator-747864d56d-lmh6g" Jul 7 00:01:28.112765 containerd[1969]: time="2025-07-07T00:01:28.112475818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8rpkf,Uid:c5f719e1-4f00-42f5-b461-fbbec569f1d2,Namespace:kube-system,Attempt:0,}" Jul 7 00:01:28.155807 containerd[1969]: time="2025-07-07T00:01:28.155684442Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:01:28.156272 containerd[1969]: time="2025-07-07T00:01:28.156089309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:01:28.156662 containerd[1969]: time="2025-07-07T00:01:28.156569733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:01:28.157113 containerd[1969]: time="2025-07-07T00:01:28.156987374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:01:28.183489 systemd[1]: Started cri-containerd-d46a6dabe4712bc225082cd224fe0f783eba8d54c519e7b7a240d5c5897b5f2c.scope - libcontainer container d46a6dabe4712bc225082cd224fe0f783eba8d54c519e7b7a240d5c5897b5f2c. 
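The RunPodSandbox message prints the CRI PodSandboxMetadata for kube-proxy-8rpkf, and once the sandbox exists systemd starts a transient scope named after its 64-hex-digit sandbox ID (cri-containerd-<id>.scope). A simplified stand-in for those pieces — the real CRI type carries more fields, and the scope is created by the containerd shim integration, not by code like this:

package main

import "fmt"

// PodSandboxMetadata echoes the fields printed in the RunPodSandbox log
// message above; a simplified stand-in for the CRI type of the same name.
type PodSandboxMetadata struct {
	Name, Uid, Namespace string
	Attempt              uint32
}

func main() {
	m := PodSandboxMetadata{
		Name:      "kube-proxy-8rpkf",
		Uid:       "c5f719e1-4f00-42f5-b461-fbbec569f1d2",
		Namespace: "kube-system",
		Attempt:   0,
	}
	// The returned sandbox ID (copied from the log) names the transient
	// systemd scope the sandbox's processes run in.
	sandboxID := "d46a6dabe4712bc225082cd224fe0f783eba8d54c519e7b7a240d5c5897b5f2c"
	fmt.Printf("%s/%s (attempt %d) -> cri-containerd-%s.scope\n",
		m.Namespace, m.Name, m.Attempt, sandboxID)
}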
Jul 7 00:01:28.210745 containerd[1969]: time="2025-07-07T00:01:28.210531033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8rpkf,Uid:c5f719e1-4f00-42f5-b461-fbbec569f1d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"d46a6dabe4712bc225082cd224fe0f783eba8d54c519e7b7a240d5c5897b5f2c\"" Jul 7 00:01:28.215201 containerd[1969]: time="2025-07-07T00:01:28.215060825Z" level=info msg="CreateContainer within sandbox \"d46a6dabe4712bc225082cd224fe0f783eba8d54c519e7b7a240d5c5897b5f2c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 00:01:28.226785 containerd[1969]: time="2025-07-07T00:01:28.226681643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-lmh6g,Uid:c03dc967-7a32-476e-8c39-68e044a7f679,Namespace:tigera-operator,Attempt:0,}" Jul 7 00:01:28.266767 containerd[1969]: time="2025-07-07T00:01:28.266547104Z" level=info msg="CreateContainer within sandbox \"d46a6dabe4712bc225082cd224fe0f783eba8d54c519e7b7a240d5c5897b5f2c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3d01ad82bcaf3161891721b1a8439050be9032dbdc93009ec49d39ab976306c7\"" Jul 7 00:01:28.268674 containerd[1969]: time="2025-07-07T00:01:28.268465847Z" level=info msg="StartContainer for \"3d01ad82bcaf3161891721b1a8439050be9032dbdc93009ec49d39ab976306c7\"" Jul 7 00:01:28.278675 containerd[1969]: time="2025-07-07T00:01:28.278130837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:01:28.279780 containerd[1969]: time="2025-07-07T00:01:28.279514664Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:01:28.280374 containerd[1969]: time="2025-07-07T00:01:28.280315808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:01:28.280779 containerd[1969]: time="2025-07-07T00:01:28.280721953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:01:28.312501 systemd[1]: Started cri-containerd-b8641c27fbea3704dbdb208d7dfdf913a6a522f6d3172d2f8d52853b6f385c04.scope - libcontainer container b8641c27fbea3704dbdb208d7dfdf913a6a522f6d3172d2f8d52853b6f385c04. Jul 7 00:01:28.317266 systemd[1]: Started cri-containerd-3d01ad82bcaf3161891721b1a8439050be9032dbdc93009ec49d39ab976306c7.scope - libcontainer container 3d01ad82bcaf3161891721b1a8439050be9032dbdc93009ec49d39ab976306c7. 
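Taken together, the containerd messages for kube-proxy trace the CRI call order: RunPodSandbox returns a sandbox ID, CreateContainer within that sandbox returns a container ID, and StartContainer launches it (its success is logged just below). A sketch of that ordering only — method names echo the log, but the real CRI is a gRPC API with much richer request and response types:

package main

import "fmt"

// runtime captures just the sequencing of the three CRI calls visible in the
// containerd log messages; signatures here are deliberately simplified.
type runtime interface {
	RunPodSandbox(name string) (sandboxID string)
	CreateContainer(sandboxID, name string) (containerID string)
	StartContainer(containerID string)
}

type loggingRuntime struct{ n int }

func (r *loggingRuntime) id(kind string) string {
	r.n++
	return fmt.Sprintf("%s-%d", kind, r.n)
}

func (r *loggingRuntime) RunPodSandbox(name string) string {
	id := r.id("sandbox")
	fmt.Println("RunPodSandbox", name, "returns sandbox id", id)
	return id
}

func (r *loggingRuntime) CreateContainer(sandboxID, name string) string {
	id := r.id("container")
	fmt.Println("CreateContainer", name, "within sandbox", sandboxID, "returns", id)
	return id
}

func (r *loggingRuntime) StartContainer(id string) {
	fmt.Println("StartContainer", id, "returns successfully")
}

func main() {
	var rt runtime = &loggingRuntime{}
	sb := rt.RunPodSandbox("kube-proxy-8rpkf")
	rt.StartContainer(rt.CreateContainer(sb, "kube-proxy"))
}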
Jul 7 00:01:28.370118 containerd[1969]: time="2025-07-07T00:01:28.369997849Z" level=info msg="StartContainer for \"3d01ad82bcaf3161891721b1a8439050be9032dbdc93009ec49d39ab976306c7\" returns successfully" Jul 7 00:01:28.391229 containerd[1969]: time="2025-07-07T00:01:28.390893734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-lmh6g,Uid:c03dc967-7a32-476e-8c39-68e044a7f679,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b8641c27fbea3704dbdb208d7dfdf913a6a522f6d3172d2f8d52853b6f385c04\"" Jul 7 00:01:28.393644 containerd[1969]: time="2025-07-07T00:01:28.393495024Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 7 00:01:28.725546 kubelet[3164]: I0707 00:01:28.725349 3164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8rpkf" podStartSLOduration=1.725331459 podStartE2EDuration="1.725331459s" podCreationTimestamp="2025-07-07 00:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:01:28.724369307 +0000 UTC m=+8.244165893" watchObservedRunningTime="2025-07-07 00:01:28.725331459 +0000 UTC m=+8.245128046" Jul 7 00:01:29.792813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1740478533.mount: Deactivated successfully. Jul 7 00:01:30.700313 containerd[1969]: time="2025-07-07T00:01:30.700230326Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:30.703257 containerd[1969]: time="2025-07-07T00:01:30.702162536Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 7 00:01:30.703257 containerd[1969]: time="2025-07-07T00:01:30.702624878Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:30.706803 containerd[1969]: time="2025-07-07T00:01:30.706760932Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:30.708322 containerd[1969]: time="2025-07-07T00:01:30.708283512Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.314748397s" Jul 7 00:01:30.708489 containerd[1969]: time="2025-07-07T00:01:30.708468218Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 7 00:01:30.716344 containerd[1969]: time="2025-07-07T00:01:30.713467465Z" level=info msg="CreateContainer within sandbox \"b8641c27fbea3704dbdb208d7dfdf913a6a522f6d3172d2f8d52853b6f385c04\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 7 00:01:30.740606 containerd[1969]: time="2025-07-07T00:01:30.740555429Z" level=info msg="CreateContainer within sandbox \"b8641c27fbea3704dbdb208d7dfdf913a6a522f6d3172d2f8d52853b6f385c04\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id 
\"8174969591062305e08a15bce7c48096948ef42dd905b8752c4ae433c5924161\"" Jul 7 00:01:30.741433 containerd[1969]: time="2025-07-07T00:01:30.741310729Z" level=info msg="StartContainer for \"8174969591062305e08a15bce7c48096948ef42dd905b8752c4ae433c5924161\"" Jul 7 00:01:30.784498 systemd[1]: Started cri-containerd-8174969591062305e08a15bce7c48096948ef42dd905b8752c4ae433c5924161.scope - libcontainer container 8174969591062305e08a15bce7c48096948ef42dd905b8752c4ae433c5924161. Jul 7 00:01:30.828342 containerd[1969]: time="2025-07-07T00:01:30.827472476Z" level=info msg="StartContainer for \"8174969591062305e08a15bce7c48096948ef42dd905b8752c4ae433c5924161\" returns successfully" Jul 7 00:01:31.010847 update_engine[1959]: I20250707 00:01:31.010653 1959 update_attempter.cc:509] Updating boot flags... Jul 7 00:01:31.106396 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3512) Jul 7 00:01:36.158305 sudo[2300]: pam_unix(sudo:session): session closed for user root Jul 7 00:01:36.185132 sshd[2297]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:36.190033 systemd[1]: sshd@6-172.31.20.165:22-147.75.109.163:58038.service: Deactivated successfully. Jul 7 00:01:36.196089 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 00:01:36.197381 systemd[1]: session-7.scope: Consumed 5.666s CPU time, 142.3M memory peak, 0B memory swap peak. Jul 7 00:01:36.200446 systemd-logind[1958]: Session 7 logged out. Waiting for processes to exit. Jul 7 00:01:36.202638 systemd-logind[1958]: Removed session 7. Jul 7 00:01:40.962164 kubelet[3164]: I0707 00:01:40.962068 3164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-lmh6g" podStartSLOduration=11.644521837 podStartE2EDuration="13.962043769s" podCreationTimestamp="2025-07-07 00:01:27 +0000 UTC" firstStartedPulling="2025-07-07 00:01:28.392631233 +0000 UTC m=+7.912427799" lastFinishedPulling="2025-07-07 00:01:30.710153163 +0000 UTC m=+10.229949731" observedRunningTime="2025-07-07 00:01:31.712805495 +0000 UTC m=+11.232602081" watchObservedRunningTime="2025-07-07 00:01:40.962043769 +0000 UTC m=+20.481840356" Jul 7 00:01:40.979199 systemd[1]: Created slice kubepods-besteffort-pod1ada1ab9_f599_4b84_806d_07b74fd3dd39.slice - libcontainer container kubepods-besteffort-pod1ada1ab9_f599_4b84_806d_07b74fd3dd39.slice. 
Jul 7 00:01:41.072933 kubelet[3164]: I0707 00:01:41.072883 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms5zt\" (UniqueName: \"kubernetes.io/projected/1ada1ab9-f599-4b84-806d-07b74fd3dd39-kube-api-access-ms5zt\") pod \"calico-typha-589bfc449c-xll82\" (UID: \"1ada1ab9-f599-4b84-806d-07b74fd3dd39\") " pod="calico-system/calico-typha-589bfc449c-xll82" Jul 7 00:01:41.073076 kubelet[3164]: I0707 00:01:41.072950 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1ada1ab9-f599-4b84-806d-07b74fd3dd39-typha-certs\") pod \"calico-typha-589bfc449c-xll82\" (UID: \"1ada1ab9-f599-4b84-806d-07b74fd3dd39\") " pod="calico-system/calico-typha-589bfc449c-xll82" Jul 7 00:01:41.073076 kubelet[3164]: I0707 00:01:41.072974 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ada1ab9-f599-4b84-806d-07b74fd3dd39-tigera-ca-bundle\") pod \"calico-typha-589bfc449c-xll82\" (UID: \"1ada1ab9-f599-4b84-806d-07b74fd3dd39\") " pod="calico-system/calico-typha-589bfc449c-xll82" Jul 7 00:01:41.290056 containerd[1969]: time="2025-07-07T00:01:41.289172322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-589bfc449c-xll82,Uid:1ada1ab9-f599-4b84-806d-07b74fd3dd39,Namespace:calico-system,Attempt:0,}" Jul 7 00:01:41.332474 containerd[1969]: time="2025-07-07T00:01:41.332370408Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:01:41.332990 containerd[1969]: time="2025-07-07T00:01:41.332713395Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:01:41.332990 containerd[1969]: time="2025-07-07T00:01:41.332752256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:01:41.332990 containerd[1969]: time="2025-07-07T00:01:41.332867273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:01:41.390475 systemd[1]: Created slice kubepods-besteffort-podc01bfcc3_e3b3_4132_9113_040987f8f501.slice - libcontainer container kubepods-besteffort-podc01bfcc3_e3b3_4132_9113_040987f8f501.slice. Jul 7 00:01:41.434184 systemd[1]: Started cri-containerd-53a6ca0feae231b163b7ea0af97ce75ac30efb15f94f3b298ca453e991d93135.scope - libcontainer container 53a6ca0feae231b163b7ea0af97ce75ac30efb15f94f3b298ca453e991d93135. 
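The burst of driver-call errors that follows comes from kubelet's FlexVolume probe: calico-node mounts a flexvol-driver-host host path (see its volume list below) and, in a stock Calico install, an init container later drops the nodeagent~uds driver binary there. Until that happens, executing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds init fails, the captured output is empty, and unmarshaling "" as JSON yields "unexpected end of JSON input" (driver-call.go:262). A minimal repro of that failure mode — the exact exec error text can differ from kubelet's "executable file not found in $PATH" wording:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Probe a driver binary that has not been installed yet, as kubelet does
	// with args [init] before calico-node puts the uds binary into place.
	out, err := exec.Command(
		"/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds",
		"init").CombinedOutput()
	fmt.Printf("exec error: %v, output: %q\n", err, out)

	// Unmarshaling the empty output reproduces the driver-call.go:262 error:
	// "unexpected end of JSON input".
	var status map[string]interface{}
	fmt.Println("unmarshal error:", json.Unmarshal(out, &status))
}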
Jul 7 00:01:41.476422 kubelet[3164]: I0707 00:01:41.476369 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c01bfcc3-e3b3-4132-9113-040987f8f501-flexvol-driver-host\") pod \"calico-node-wr85v\" (UID: \"c01bfcc3-e3b3-4132-9113-040987f8f501\") " pod="calico-system/calico-node-wr85v" Jul 7 00:01:41.476583 kubelet[3164]: I0707 00:01:41.476431 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c01bfcc3-e3b3-4132-9113-040987f8f501-xtables-lock\") pod \"calico-node-wr85v\" (UID: \"c01bfcc3-e3b3-4132-9113-040987f8f501\") " pod="calico-system/calico-node-wr85v" Jul 7 00:01:41.476583 kubelet[3164]: I0707 00:01:41.476456 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c01bfcc3-e3b3-4132-9113-040987f8f501-tigera-ca-bundle\") pod \"calico-node-wr85v\" (UID: \"c01bfcc3-e3b3-4132-9113-040987f8f501\") " pod="calico-system/calico-node-wr85v" Jul 7 00:01:41.476583 kubelet[3164]: I0707 00:01:41.476480 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c01bfcc3-e3b3-4132-9113-040987f8f501-cni-log-dir\") pod \"calico-node-wr85v\" (UID: \"c01bfcc3-e3b3-4132-9113-040987f8f501\") " pod="calico-system/calico-node-wr85v" Jul 7 00:01:41.476583 kubelet[3164]: I0707 00:01:41.476503 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c01bfcc3-e3b3-4132-9113-040987f8f501-cni-bin-dir\") pod \"calico-node-wr85v\" (UID: \"c01bfcc3-e3b3-4132-9113-040987f8f501\") " pod="calico-system/calico-node-wr85v" Jul 7 00:01:41.476583 kubelet[3164]: I0707 00:01:41.476522 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c01bfcc3-e3b3-4132-9113-040987f8f501-lib-modules\") pod \"calico-node-wr85v\" (UID: \"c01bfcc3-e3b3-4132-9113-040987f8f501\") " pod="calico-system/calico-node-wr85v" Jul 7 00:01:41.476796 kubelet[3164]: I0707 00:01:41.476544 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4pjz\" (UniqueName: \"kubernetes.io/projected/c01bfcc3-e3b3-4132-9113-040987f8f501-kube-api-access-z4pjz\") pod \"calico-node-wr85v\" (UID: \"c01bfcc3-e3b3-4132-9113-040987f8f501\") " pod="calico-system/calico-node-wr85v" Jul 7 00:01:41.476796 kubelet[3164]: I0707 00:01:41.476568 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c01bfcc3-e3b3-4132-9113-040987f8f501-cni-net-dir\") pod \"calico-node-wr85v\" (UID: \"c01bfcc3-e3b3-4132-9113-040987f8f501\") " pod="calico-system/calico-node-wr85v" Jul 7 00:01:41.476796 kubelet[3164]: I0707 00:01:41.476593 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c01bfcc3-e3b3-4132-9113-040987f8f501-node-certs\") pod \"calico-node-wr85v\" (UID: \"c01bfcc3-e3b3-4132-9113-040987f8f501\") " pod="calico-system/calico-node-wr85v" Jul 7 00:01:41.476796 kubelet[3164]: I0707 00:01:41.476617 3164 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c01bfcc3-e3b3-4132-9113-040987f8f501-var-run-calico\") pod \"calico-node-wr85v\" (UID: \"c01bfcc3-e3b3-4132-9113-040987f8f501\") " pod="calico-system/calico-node-wr85v" Jul 7 00:01:41.476796 kubelet[3164]: I0707 00:01:41.476653 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c01bfcc3-e3b3-4132-9113-040987f8f501-policysync\") pod \"calico-node-wr85v\" (UID: \"c01bfcc3-e3b3-4132-9113-040987f8f501\") " pod="calico-system/calico-node-wr85v" Jul 7 00:01:41.476995 kubelet[3164]: I0707 00:01:41.476677 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c01bfcc3-e3b3-4132-9113-040987f8f501-var-lib-calico\") pod \"calico-node-wr85v\" (UID: \"c01bfcc3-e3b3-4132-9113-040987f8f501\") " pod="calico-system/calico-node-wr85v" Jul 7 00:01:41.588336 kubelet[3164]: E0707 00:01:41.586796 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.588336 kubelet[3164]: W0707 00:01:41.586825 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.588995 kubelet[3164]: E0707 00:01:41.588928 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.600656 kubelet[3164]: E0707 00:01:41.600555 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.600656 kubelet[3164]: W0707 00:01:41.600582 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.600656 kubelet[3164]: E0707 00:01:41.600608 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:01:41.661092 kubelet[3164]: E0707 00:01:41.660220 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-prlf8" podUID="15afd7c1-8f7a-46ff-bed1-960e12f70585" Jul 7 00:01:41.697190 containerd[1969]: time="2025-07-07T00:01:41.697126199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wr85v,Uid:c01bfcc3-e3b3-4132-9113-040987f8f501,Namespace:calico-system,Attempt:0,}" Jul 7 00:01:41.725601 containerd[1969]: time="2025-07-07T00:01:41.725446159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-589bfc449c-xll82,Uid:1ada1ab9-f599-4b84-806d-07b74fd3dd39,Namespace:calico-system,Attempt:0,} returns sandbox id \"53a6ca0feae231b163b7ea0af97ce75ac30efb15f94f3b298ca453e991d93135\"" Jul 7 00:01:41.730030 containerd[1969]: time="2025-07-07T00:01:41.729559995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 7 00:01:41.749810 kubelet[3164]: E0707 00:01:41.749501 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.749810 kubelet[3164]: W0707 00:01:41.749530 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.749810 kubelet[3164]: E0707 00:01:41.749557 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.751557 kubelet[3164]: E0707 00:01:41.750088 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.751557 kubelet[3164]: W0707 00:01:41.750106 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.751557 kubelet[3164]: E0707 00:01:41.750126 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.751557 kubelet[3164]: E0707 00:01:41.751484 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.751557 kubelet[3164]: W0707 00:01:41.751534 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.751557 kubelet[3164]: E0707 00:01:41.751556 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:01:41.751978 kubelet[3164]: E0707 00:01:41.751938 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.751978 kubelet[3164]: W0707 00:01:41.751952 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.751978 kubelet[3164]: E0707 00:01:41.751967 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.752827 kubelet[3164]: E0707 00:01:41.752326 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.752827 kubelet[3164]: W0707 00:01:41.752341 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.752827 kubelet[3164]: E0707 00:01:41.752355 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.752827 kubelet[3164]: E0707 00:01:41.752617 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.752827 kubelet[3164]: W0707 00:01:41.752628 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.752827 kubelet[3164]: E0707 00:01:41.752642 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.753165 kubelet[3164]: E0707 00:01:41.752872 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.753165 kubelet[3164]: W0707 00:01:41.752882 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.753165 kubelet[3164]: E0707 00:01:41.752897 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.753865 kubelet[3164]: E0707 00:01:41.753399 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.753865 kubelet[3164]: W0707 00:01:41.753413 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.753865 kubelet[3164]: E0707 00:01:41.753427 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:01:41.753865 kubelet[3164]: E0707 00:01:41.753668 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.753865 kubelet[3164]: W0707 00:01:41.753679 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.753865 kubelet[3164]: E0707 00:01:41.753691 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.755185 kubelet[3164]: E0707 00:01:41.754350 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.755185 kubelet[3164]: W0707 00:01:41.754363 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.755185 kubelet[3164]: E0707 00:01:41.754378 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.755185 kubelet[3164]: E0707 00:01:41.754667 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.755185 kubelet[3164]: W0707 00:01:41.754678 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.755185 kubelet[3164]: E0707 00:01:41.754701 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.755185 kubelet[3164]: E0707 00:01:41.754973 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.755185 kubelet[3164]: W0707 00:01:41.754989 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.755185 kubelet[3164]: E0707 00:01:41.755002 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.757494 kubelet[3164]: E0707 00:01:41.755309 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.757494 kubelet[3164]: W0707 00:01:41.755331 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.757494 kubelet[3164]: E0707 00:01:41.755345 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:01:41.757494 kubelet[3164]: E0707 00:01:41.755729 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.757494 kubelet[3164]: W0707 00:01:41.755750 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.757494 kubelet[3164]: E0707 00:01:41.755764 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.757494 kubelet[3164]: E0707 00:01:41.756370 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.757494 kubelet[3164]: W0707 00:01:41.756404 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.757494 kubelet[3164]: E0707 00:01:41.756418 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.757494 kubelet[3164]: E0707 00:01:41.756715 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.760176 kubelet[3164]: W0707 00:01:41.756725 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.760176 kubelet[3164]: E0707 00:01:41.756737 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.760176 kubelet[3164]: E0707 00:01:41.757059 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.760176 kubelet[3164]: W0707 00:01:41.757071 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.760176 kubelet[3164]: E0707 00:01:41.757194 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.760176 kubelet[3164]: E0707 00:01:41.758373 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.760176 kubelet[3164]: W0707 00:01:41.758387 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.760176 kubelet[3164]: E0707 00:01:41.758404 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:01:41.760176 kubelet[3164]: E0707 00:01:41.758635 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.760176 kubelet[3164]: W0707 00:01:41.758645 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.760589 kubelet[3164]: E0707 00:01:41.758658 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.760589 kubelet[3164]: E0707 00:01:41.758863 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.760589 kubelet[3164]: W0707 00:01:41.758872 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.760589 kubelet[3164]: E0707 00:01:41.758883 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.773873 containerd[1969]: time="2025-07-07T00:01:41.767494159Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:01:41.773873 containerd[1969]: time="2025-07-07T00:01:41.767563873Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:01:41.773873 containerd[1969]: time="2025-07-07T00:01:41.767599281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:01:41.773873 containerd[1969]: time="2025-07-07T00:01:41.767723667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:01:41.784940 kubelet[3164]: E0707 00:01:41.781464 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.784940 kubelet[3164]: W0707 00:01:41.781490 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.784940 kubelet[3164]: E0707 00:01:41.781514 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.784940 kubelet[3164]: E0707 00:01:41.782273 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.784940 kubelet[3164]: W0707 00:01:41.782301 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.784940 kubelet[3164]: E0707 00:01:41.782324 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:01:41.784940 kubelet[3164]: E0707 00:01:41.782888 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.784940 kubelet[3164]: W0707 00:01:41.782902 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.784940 kubelet[3164]: E0707 00:01:41.782919 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.785423 kubelet[3164]: I0707 00:01:41.782561 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/15afd7c1-8f7a-46ff-bed1-960e12f70585-socket-dir\") pod \"csi-node-driver-prlf8\" (UID: \"15afd7c1-8f7a-46ff-bed1-960e12f70585\") " pod="calico-system/csi-node-driver-prlf8" Jul 7 00:01:41.785423 kubelet[3164]: E0707 00:01:41.784943 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.785423 kubelet[3164]: W0707 00:01:41.784959 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.785423 kubelet[3164]: E0707 00:01:41.784978 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.785423 kubelet[3164]: I0707 00:01:41.785006 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/15afd7c1-8f7a-46ff-bed1-960e12f70585-registration-dir\") pod \"csi-node-driver-prlf8\" (UID: \"15afd7c1-8f7a-46ff-bed1-960e12f70585\") " pod="calico-system/csi-node-driver-prlf8" Jul 7 00:01:41.786851 kubelet[3164]: E0707 00:01:41.786803 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.786851 kubelet[3164]: W0707 00:01:41.786830 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.786851 kubelet[3164]: E0707 00:01:41.786850 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:01:41.787093 kubelet[3164]: I0707 00:01:41.786877 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/15afd7c1-8f7a-46ff-bed1-960e12f70585-varrun\") pod \"csi-node-driver-prlf8\" (UID: \"15afd7c1-8f7a-46ff-bed1-960e12f70585\") " pod="calico-system/csi-node-driver-prlf8" Jul 7 00:01:41.795226 kubelet[3164]: E0707 00:01:41.788534 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.795226 kubelet[3164]: W0707 00:01:41.788558 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.795226 kubelet[3164]: E0707 00:01:41.788592 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.795226 kubelet[3164]: I0707 00:01:41.788622 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzwkc\" (UniqueName: \"kubernetes.io/projected/15afd7c1-8f7a-46ff-bed1-960e12f70585-kube-api-access-vzwkc\") pod \"csi-node-driver-prlf8\" (UID: \"15afd7c1-8f7a-46ff-bed1-960e12f70585\") " pod="calico-system/csi-node-driver-prlf8" Jul 7 00:01:41.795226 kubelet[3164]: E0707 00:01:41.788926 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.795226 kubelet[3164]: W0707 00:01:41.788938 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.795226 kubelet[3164]: E0707 00:01:41.789036 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.795226 kubelet[3164]: I0707 00:01:41.789065 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/15afd7c1-8f7a-46ff-bed1-960e12f70585-kubelet-dir\") pod \"csi-node-driver-prlf8\" (UID: \"15afd7c1-8f7a-46ff-bed1-960e12f70585\") " pod="calico-system/csi-node-driver-prlf8" Jul 7 00:01:41.795226 kubelet[3164]: E0707 00:01:41.789344 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.795745 kubelet[3164]: W0707 00:01:41.789356 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.795745 kubelet[3164]: E0707 00:01:41.789467 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:01:41.795745 kubelet[3164]: E0707 00:01:41.789880 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.795745 kubelet[3164]: W0707 00:01:41.789891 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.795745 kubelet[3164]: E0707 00:01:41.789910 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.795745 kubelet[3164]: E0707 00:01:41.791542 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.795745 kubelet[3164]: W0707 00:01:41.791554 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.795745 kubelet[3164]: E0707 00:01:41.791584 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.795745 kubelet[3164]: E0707 00:01:41.791829 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.795745 kubelet[3164]: W0707 00:01:41.791839 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.796133 kubelet[3164]: E0707 00:01:41.791866 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.796133 kubelet[3164]: E0707 00:01:41.792121 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.796133 kubelet[3164]: W0707 00:01:41.792132 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.796133 kubelet[3164]: E0707 00:01:41.792168 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.796133 kubelet[3164]: E0707 00:01:41.793085 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.796133 kubelet[3164]: W0707 00:01:41.793098 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.796133 kubelet[3164]: E0707 00:01:41.793113 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:01:41.796133 kubelet[3164]: E0707 00:01:41.794142 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.796133 kubelet[3164]: W0707 00:01:41.794155 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.796133 kubelet[3164]: E0707 00:01:41.794170 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.796552 kubelet[3164]: E0707 00:01:41.794992 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.796552 kubelet[3164]: W0707 00:01:41.795004 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.796552 kubelet[3164]: E0707 00:01:41.795018 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.807552 systemd[1]: Started cri-containerd-862a64f2d594037fefd578c7c781ee56ed94dfe15e70a0ab14c798ca8194beba.scope - libcontainer container 862a64f2d594037fefd578c7c781ee56ed94dfe15e70a0ab14c798ca8194beba. Jul 7 00:01:41.893688 kubelet[3164]: E0707 00:01:41.893543 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.893688 kubelet[3164]: W0707 00:01:41.893567 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.893688 kubelet[3164]: E0707 00:01:41.893608 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.897534 kubelet[3164]: E0707 00:01:41.895452 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.897534 kubelet[3164]: W0707 00:01:41.897289 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.897534 kubelet[3164]: E0707 00:01:41.897323 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:01:41.902728 kubelet[3164]: E0707 00:01:41.901961 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.902728 kubelet[3164]: W0707 00:01:41.901989 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.902728 kubelet[3164]: E0707 00:01:41.902493 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.903970 kubelet[3164]: E0707 00:01:41.903909 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.903970 kubelet[3164]: W0707 00:01:41.903929 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.904521 kubelet[3164]: E0707 00:01:41.904433 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.906455 kubelet[3164]: E0707 00:01:41.906121 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.906455 kubelet[3164]: W0707 00:01:41.906142 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.907065 kubelet[3164]: E0707 00:01:41.906965 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.908206 kubelet[3164]: E0707 00:01:41.908189 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.908662 kubelet[3164]: W0707 00:01:41.908546 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.909120 kubelet[3164]: E0707 00:01:41.908906 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.909742 kubelet[3164]: E0707 00:01:41.909627 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.909742 kubelet[3164]: W0707 00:01:41.909642 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.909742 kubelet[3164]: E0707 00:01:41.909706 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:01:41.910673 kubelet[3164]: E0707 00:01:41.910222 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.910673 kubelet[3164]: W0707 00:01:41.910624 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.911109 kubelet[3164]: E0707 00:01:41.911002 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.911109 kubelet[3164]: W0707 00:01:41.911018 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.911361 kubelet[3164]: E0707 00:01:41.911313 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.911361 kubelet[3164]: E0707 00:01:41.911343 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.911530 kubelet[3164]: E0707 00:01:41.911518 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.911659 kubelet[3164]: W0707 00:01:41.911593 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.911800 kubelet[3164]: E0707 00:01:41.911729 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.912109 kubelet[3164]: E0707 00:01:41.912080 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.912109 kubelet[3164]: W0707 00:01:41.912093 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.912694 kubelet[3164]: E0707 00:01:41.912650 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.913004 kubelet[3164]: E0707 00:01:41.912909 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.913004 kubelet[3164]: W0707 00:01:41.912922 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.913227 kubelet[3164]: E0707 00:01:41.913117 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:01:41.913485 kubelet[3164]: E0707 00:01:41.913351 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.913485 kubelet[3164]: W0707 00:01:41.913364 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.913743 kubelet[3164]: E0707 00:01:41.913650 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.914155 kubelet[3164]: E0707 00:01:41.914058 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.914380 kubelet[3164]: W0707 00:01:41.914288 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.914770 kubelet[3164]: E0707 00:01:41.914643 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.915419 kubelet[3164]: E0707 00:01:41.915310 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.915419 kubelet[3164]: W0707 00:01:41.915325 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.915671 kubelet[3164]: E0707 00:01:41.915542 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.915905 kubelet[3164]: E0707 00:01:41.915803 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.915905 kubelet[3164]: W0707 00:01:41.915816 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.916683 kubelet[3164]: E0707 00:01:41.916016 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:01:41.916748 containerd[1969]: time="2025-07-07T00:01:41.916600311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wr85v,Uid:c01bfcc3-e3b3-4132-9113-040987f8f501,Namespace:calico-system,Attempt:0,} returns sandbox id \"862a64f2d594037fefd578c7c781ee56ed94dfe15e70a0ab14c798ca8194beba\"" Jul 7 00:01:41.917122 kubelet[3164]: E0707 00:01:41.916979 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.917122 kubelet[3164]: W0707 00:01:41.916993 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.917592 kubelet[3164]: E0707 00:01:41.917454 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.917775 kubelet[3164]: E0707 00:01:41.917719 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.917775 kubelet[3164]: W0707 00:01:41.917732 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.917948 kubelet[3164]: E0707 00:01:41.917827 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.919262 kubelet[3164]: E0707 00:01:41.919113 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.919262 kubelet[3164]: W0707 00:01:41.919128 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.919641 kubelet[3164]: E0707 00:01:41.919509 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.920117 kubelet[3164]: E0707 00:01:41.919925 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.920117 kubelet[3164]: W0707 00:01:41.919938 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.920423 kubelet[3164]: E0707 00:01:41.920285 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:01:41.920880 kubelet[3164]: E0707 00:01:41.920742 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.920880 kubelet[3164]: W0707 00:01:41.920755 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.921023 kubelet[3164]: E0707 00:01:41.921006 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.921376 kubelet[3164]: E0707 00:01:41.921284 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.921376 kubelet[3164]: W0707 00:01:41.921297 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.921700 kubelet[3164]: E0707 00:01:41.921542 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.922059 kubelet[3164]: E0707 00:01:41.921929 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.922059 kubelet[3164]: W0707 00:01:41.921943 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.922273 kubelet[3164]: E0707 00:01:41.922182 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.923231 kubelet[3164]: E0707 00:01:41.922776 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.923231 kubelet[3164]: W0707 00:01:41.922789 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.923231 kubelet[3164]: E0707 00:01:41.922828 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:41.924341 kubelet[3164]: E0707 00:01:41.924326 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.924450 kubelet[3164]: W0707 00:01:41.924436 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.924552 kubelet[3164]: E0707 00:01:41.924536 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:01:41.938383 kubelet[3164]: E0707 00:01:41.938303 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:41.938383 kubelet[3164]: W0707 00:01:41.938321 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:41.938383 kubelet[3164]: E0707 00:01:41.938346 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:43.323848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1725319089.mount: Deactivated successfully. Jul 7 00:01:43.630548 kubelet[3164]: E0707 00:01:43.629403 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-prlf8" podUID="15afd7c1-8f7a-46ff-bed1-960e12f70585" Jul 7 00:01:44.166968 containerd[1969]: time="2025-07-07T00:01:44.166904991Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:44.169600 containerd[1969]: time="2025-07-07T00:01:44.169518236Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 7 00:01:44.185282 containerd[1969]: time="2025-07-07T00:01:44.184938527Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:44.190110 containerd[1969]: time="2025-07-07T00:01:44.188891310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:44.190110 containerd[1969]: time="2025-07-07T00:01:44.189666154Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.460060036s" Jul 7 00:01:44.190110 containerd[1969]: time="2025-07-07T00:01:44.189707963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 7 00:01:44.191854 containerd[1969]: time="2025-07-07T00:01:44.191484143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 7 00:01:44.214158 containerd[1969]: time="2025-07-07T00:01:44.214112403Z" level=info msg="CreateContainer within sandbox \"53a6ca0feae231b163b7ea0af97ce75ac30efb15f94f3b298ca453e991d93135\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 7 00:01:44.233705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1070456027.mount: Deactivated successfully. 
Jul 7 00:01:44.236747 containerd[1969]: time="2025-07-07T00:01:44.236541669Z" level=info msg="CreateContainer within sandbox \"53a6ca0feae231b163b7ea0af97ce75ac30efb15f94f3b298ca453e991d93135\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"000142d32564dd3a94debe014e7c0d3e7afc8ae863e0b961df3748476c1a73e2\""
Jul 7 00:01:44.238169 containerd[1969]: time="2025-07-07T00:01:44.237524276Z" level=info msg="StartContainer for \"000142d32564dd3a94debe014e7c0d3e7afc8ae863e0b961df3748476c1a73e2\""
Jul 7 00:01:44.294507 systemd[1]: Started cri-containerd-000142d32564dd3a94debe014e7c0d3e7afc8ae863e0b961df3748476c1a73e2.scope - libcontainer container 000142d32564dd3a94debe014e7c0d3e7afc8ae863e0b961df3748476c1a73e2.
Jul 7 00:01:44.394186 containerd[1969]: time="2025-07-07T00:01:44.394136684Z" level=info msg="StartContainer for \"000142d32564dd3a94debe014e7c0d3e7afc8ae863e0b961df3748476c1a73e2\" returns successfully"
Jul 7 00:01:44.779978 kubelet[3164]: I0707 00:01:44.779891 3164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-589bfc449c-xll82" podStartSLOduration=2.317854293 podStartE2EDuration="4.77986721s" podCreationTimestamp="2025-07-07 00:01:40 +0000 UTC" firstStartedPulling="2025-07-07 00:01:41.729191218 +0000 UTC m=+21.248987784" lastFinishedPulling="2025-07-07 00:01:44.191204118 +0000 UTC m=+23.711000701" observedRunningTime="2025-07-07 00:01:44.778474278 +0000 UTC m=+24.298270864" watchObservedRunningTime="2025-07-07 00:01:44.77986721 +0000 UTC m=+24.299663794"
Jul 7 00:01:44.781276 kubelet[3164]: E0707 00:01:44.780815 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:01:44.781415 kubelet[3164]: W0707 00:01:44.781282 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:01:44.781415 kubelet[3164]: E0707 00:01:44.781314 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[A second burst of the same probe-failure triplet follows: roughly thirty more occurrences between 00:01:44.781 and 00:01:44.859; duplicates elided.]
Error: unexpected end of JSON input" Jul 7 00:01:44.787479 kubelet[3164]: E0707 00:01:44.787459 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:44.787479 kubelet[3164]: W0707 00:01:44.787476 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:44.787610 kubelet[3164]: E0707 00:01:44.787491 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:44.789597 kubelet[3164]: E0707 00:01:44.789573 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:44.789597 kubelet[3164]: W0707 00:01:44.789594 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:44.789728 kubelet[3164]: E0707 00:01:44.789613 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:44.790278 kubelet[3164]: E0707 00:01:44.789896 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:44.790278 kubelet[3164]: W0707 00:01:44.789907 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:44.790278 kubelet[3164]: E0707 00:01:44.789922 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:44.790278 kubelet[3164]: E0707 00:01:44.790135 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:44.790278 kubelet[3164]: W0707 00:01:44.790143 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:44.790278 kubelet[3164]: E0707 00:01:44.790154 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:44.790725 kubelet[3164]: E0707 00:01:44.790463 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:44.790725 kubelet[3164]: W0707 00:01:44.790474 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:44.790725 kubelet[3164]: E0707 00:01:44.790488 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:01:44.790725 kubelet[3164]: E0707 00:01:44.790702 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:44.790725 kubelet[3164]: W0707 00:01:44.790712 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:44.790725 kubelet[3164]: E0707 00:01:44.790723 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:44.790996 kubelet[3164]: E0707 00:01:44.790930 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:44.790996 kubelet[3164]: W0707 00:01:44.790939 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:44.790996 kubelet[3164]: E0707 00:01:44.790950 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:44.847119 kubelet[3164]: E0707 00:01:44.847082 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:44.847119 kubelet[3164]: W0707 00:01:44.847115 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:44.847333 kubelet[3164]: E0707 00:01:44.847139 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:44.847550 kubelet[3164]: E0707 00:01:44.847529 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:44.847550 kubelet[3164]: W0707 00:01:44.847549 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:44.849342 kubelet[3164]: E0707 00:01:44.849309 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:44.849698 kubelet[3164]: E0707 00:01:44.849676 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:44.849698 kubelet[3164]: W0707 00:01:44.849697 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:44.849847 kubelet[3164]: E0707 00:01:44.849743 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:01:44.850048 kubelet[3164]: E0707 00:01:44.850031 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:44.850048 kubelet[3164]: W0707 00:01:44.850048 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:44.850176 kubelet[3164]: E0707 00:01:44.850074 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:44.850347 kubelet[3164]: E0707 00:01:44.850331 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:44.850400 kubelet[3164]: W0707 00:01:44.850348 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:44.850400 kubelet[3164]: E0707 00:01:44.850374 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:44.850680 kubelet[3164]: E0707 00:01:44.850661 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:44.850680 kubelet[3164]: W0707 00:01:44.850678 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:44.850791 kubelet[3164]: E0707 00:01:44.850704 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:44.853431 kubelet[3164]: E0707 00:01:44.853400 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:44.853431 kubelet[3164]: W0707 00:01:44.853422 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:44.853574 kubelet[3164]: E0707 00:01:44.853446 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:44.854274 kubelet[3164]: E0707 00:01:44.853773 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:44.854274 kubelet[3164]: W0707 00:01:44.853787 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:44.854274 kubelet[3164]: E0707 00:01:44.853879 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:01:44.854274 kubelet[3164]: E0707 00:01:44.854072 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:44.854274 kubelet[3164]: W0707 00:01:44.854081 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:44.854274 kubelet[3164]: E0707 00:01:44.854192 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:44.854618 kubelet[3164]: E0707 00:01:44.854353 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:44.854618 kubelet[3164]: W0707 00:01:44.854363 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:44.854618 kubelet[3164]: E0707 00:01:44.854443 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:44.854759 kubelet[3164]: E0707 00:01:44.854644 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:44.854759 kubelet[3164]: W0707 00:01:44.854654 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:44.854759 kubelet[3164]: E0707 00:01:44.854682 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:44.854996 kubelet[3164]: E0707 00:01:44.854904 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:44.854996 kubelet[3164]: W0707 00:01:44.854917 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:44.854996 kubelet[3164]: E0707 00:01:44.854940 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:44.855511 kubelet[3164]: E0707 00:01:44.855412 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:44.855511 kubelet[3164]: W0707 00:01:44.855425 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:44.855733 kubelet[3164]: E0707 00:01:44.855711 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:01:44.856814 kubelet[3164]: E0707 00:01:44.856788 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:44.856814 kubelet[3164]: W0707 00:01:44.856807 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:44.856958 kubelet[3164]: E0707 00:01:44.856826 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:44.857069 kubelet[3164]: E0707 00:01:44.857053 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:44.857121 kubelet[3164]: W0707 00:01:44.857069 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:44.859318 kubelet[3164]: E0707 00:01:44.857129 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:44.859318 kubelet[3164]: E0707 00:01:44.858462 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:44.859318 kubelet[3164]: W0707 00:01:44.858474 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:44.859318 kubelet[3164]: E0707 00:01:44.858499 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:44.859318 kubelet[3164]: E0707 00:01:44.858773 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:44.859318 kubelet[3164]: W0707 00:01:44.858783 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:44.859318 kubelet[3164]: E0707 00:01:44.858795 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:44.859318 kubelet[3164]: E0707 00:01:44.859217 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:44.859318 kubelet[3164]: W0707 00:01:44.859226 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:44.859318 kubelet[3164]: E0707 00:01:44.859318 3164 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:01:45.490282 containerd[1969]: time="2025-07-07T00:01:45.488283835Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 7 00:01:45.490282 containerd[1969]: time="2025-07-07T00:01:45.488385746Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:45.490282 containerd[1969]: time="2025-07-07T00:01:45.489504688Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:45.490282 containerd[1969]: time="2025-07-07T00:01:45.490090719Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:45.492188 containerd[1969]: time="2025-07-07T00:01:45.491150709Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.299631432s" Jul 7 00:01:45.492188 containerd[1969]: time="2025-07-07T00:01:45.491178579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 7 00:01:45.493655 containerd[1969]: time="2025-07-07T00:01:45.493609152Z" level=info msg="CreateContainer within sandbox \"862a64f2d594037fefd578c7c781ee56ed94dfe15e70a0ab14c798ca8194beba\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 7 00:01:45.512301 containerd[1969]: time="2025-07-07T00:01:45.512253196Z" level=info msg="CreateContainer within sandbox \"862a64f2d594037fefd578c7c781ee56ed94dfe15e70a0ab14c798ca8194beba\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ef5a158414e26894d61526a95a3e615192514a0ac1397a1628a19e2b8aa4fce7\"" Jul 7 00:01:45.514513 containerd[1969]: time="2025-07-07T00:01:45.514093371Z" level=info msg="StartContainer for \"ef5a158414e26894d61526a95a3e615192514a0ac1397a1628a19e2b8aa4fce7\"" Jul 7 00:01:45.603417 systemd[1]: Started cri-containerd-ef5a158414e26894d61526a95a3e615192514a0ac1397a1628a19e2b8aa4fce7.scope - libcontainer container ef5a158414e26894d61526a95a3e615192514a0ac1397a1628a19e2b8aa4fce7. Jul 7 00:01:45.630495 kubelet[3164]: E0707 00:01:45.629725 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-prlf8" podUID="15afd7c1-8f7a-46ff-bed1-960e12f70585" Jul 7 00:01:45.658370 containerd[1969]: time="2025-07-07T00:01:45.658332441Z" level=info msg="StartContainer for \"ef5a158414e26894d61526a95a3e615192514a0ac1397a1628a19e2b8aa4fce7\" returns successfully" Jul 7 00:01:45.676762 systemd[1]: cri-containerd-ef5a158414e26894d61526a95a3e615192514a0ac1397a1628a19e2b8aa4fce7.scope: Deactivated successfully. 
Jul 7 00:01:45.713854 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef5a158414e26894d61526a95a3e615192514a0ac1397a1628a19e2b8aa4fce7-rootfs.mount: Deactivated successfully.
Jul 7 00:01:45.736948 kubelet[3164]: I0707 00:01:45.736657 3164 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 7 00:01:45.874453 containerd[1969]: time="2025-07-07T00:01:45.847861526Z" level=info msg="shim disconnected" id=ef5a158414e26894d61526a95a3e615192514a0ac1397a1628a19e2b8aa4fce7 namespace=k8s.io
Jul 7 00:01:45.874453 containerd[1969]: time="2025-07-07T00:01:45.874276780Z" level=warning msg="cleaning up after shim disconnected" id=ef5a158414e26894d61526a95a3e615192514a0ac1397a1628a19e2b8aa4fce7 namespace=k8s.io
Jul 7 00:01:45.874453 containerd[1969]: time="2025-07-07T00:01:45.874294792Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 00:01:46.742527 containerd[1969]: time="2025-07-07T00:01:46.742488822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\""
Jul 7 00:01:47.629373 kubelet[3164]: E0707 00:01:47.629325 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-prlf8" podUID="15afd7c1-8f7a-46ff-bed1-960e12f70585"
Jul 7 00:01:49.629508 kubelet[3164]: E0707 00:01:49.629440 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-prlf8" podUID="15afd7c1-8f7a-46ff-bed1-960e12f70585"
Jul 7 00:01:50.121254 containerd[1969]: time="2025-07-07T00:01:50.121111108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:01:50.122864 containerd[1969]: time="2025-07-07T00:01:50.122649767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221"
Jul 7 00:01:50.125044 containerd[1969]: time="2025-07-07T00:01:50.123844804Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:01:50.126602 containerd[1969]: time="2025-07-07T00:01:50.126552249Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:01:50.127309 containerd[1969]: time="2025-07-07T00:01:50.127272176Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.384743002s"
Jul 7 00:01:50.127400 containerd[1969]: time="2025-07-07T00:01:50.127315299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\""
Jul 7 00:01:50.130723 containerd[1969]: time="2025-07-07T00:01:50.130420419Z" level=info msg="CreateContainer within sandbox \"862a64f2d594037fefd578c7c781ee56ed94dfe15e70a0ab14c798ca8194beba\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jul 7 00:01:50.160212 containerd[1969]: time="2025-07-07T00:01:50.160170410Z" level=info msg="CreateContainer within sandbox \"862a64f2d594037fefd578c7c781ee56ed94dfe15e70a0ab14c798ca8194beba\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"80c0e1ffd71227e929727a0c1ca8ba8bef4e57d309ce286adbc926ea2b74547b\""
Jul 7 00:01:50.180196 containerd[1969]: time="2025-07-07T00:01:50.180126318Z" level=info msg="StartContainer for \"80c0e1ffd71227e929727a0c1ca8ba8bef4e57d309ce286adbc926ea2b74547b\""
Jul 7 00:01:50.220060 systemd[1]: run-containerd-runc-k8s.io-80c0e1ffd71227e929727a0c1ca8ba8bef4e57d309ce286adbc926ea2b74547b-runc.1vgCg6.mount: Deactivated successfully.
Jul 7 00:01:50.228786 systemd[1]: Started cri-containerd-80c0e1ffd71227e929727a0c1ca8ba8bef4e57d309ce286adbc926ea2b74547b.scope - libcontainer container 80c0e1ffd71227e929727a0c1ca8ba8bef4e57d309ce286adbc926ea2b74547b.
Jul 7 00:01:50.292090 containerd[1969]: time="2025-07-07T00:01:50.291311271Z" level=info msg="StartContainer for \"80c0e1ffd71227e929727a0c1ca8ba8bef4e57d309ce286adbc926ea2b74547b\" returns successfully"
Jul 7 00:01:51.153893 systemd[1]: cri-containerd-80c0e1ffd71227e929727a0c1ca8ba8bef4e57d309ce286adbc926ea2b74547b.scope: Deactivated successfully.
Jul 7 00:01:51.209072 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80c0e1ffd71227e929727a0c1ca8ba8bef4e57d309ce286adbc926ea2b74547b-rootfs.mount: Deactivated successfully.
Jul 7 00:01:51.214508 kubelet[3164]: I0707 00:01:51.214317 3164 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jul 7 00:01:51.387322 containerd[1969]: time="2025-07-07T00:01:51.387219244Z" level=info msg="shim disconnected" id=80c0e1ffd71227e929727a0c1ca8ba8bef4e57d309ce286adbc926ea2b74547b namespace=k8s.io
Jul 7 00:01:51.387877 containerd[1969]: time="2025-07-07T00:01:51.387335685Z" level=warning msg="cleaning up after shim disconnected" id=80c0e1ffd71227e929727a0c1ca8ba8bef4e57d309ce286adbc926ea2b74547b namespace=k8s.io
Jul 7 00:01:51.387877 containerd[1969]: time="2025-07-07T00:01:51.387349915Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 00:01:51.397451 systemd[1]: Created slice kubepods-burstable-pod4e01e08e_a6ee_4ae7_9f88_e30bb7b73f15.slice - libcontainer container kubepods-burstable-pod4e01e08e_a6ee_4ae7_9f88_e30bb7b73f15.slice.
Jul 7 00:01:51.408426 systemd[1]: Created slice kubepods-besteffort-pod323f6ecf_2b19_407b_9a25_f503af5af82a.slice - libcontainer container kubepods-besteffort-pod323f6ecf_2b19_407b_9a25_f503af5af82a.slice.
Jul 7 00:01:51.412294 kubelet[3164]: I0707 00:01:51.411816 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6f08f069-babf-4598-9d97-921e74c663ed-whisker-backend-key-pair\") pod \"whisker-55469f4b7-fbf5p\" (UID: \"6f08f069-babf-4598-9d97-921e74c663ed\") " pod="calico-system/whisker-55469f4b7-fbf5p"
Jul 7 00:01:51.412294 kubelet[3164]: I0707 00:01:51.411865 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f08f069-babf-4598-9d97-921e74c663ed-whisker-ca-bundle\") pod \"whisker-55469f4b7-fbf5p\" (UID: \"6f08f069-babf-4598-9d97-921e74c663ed\") " pod="calico-system/whisker-55469f4b7-fbf5p"
Jul 7 00:01:51.412294 kubelet[3164]: I0707 00:01:51.411894 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st2c5\" (UniqueName: \"kubernetes.io/projected/323f6ecf-2b19-407b-9a25-f503af5af82a-kube-api-access-st2c5\") pod \"calico-kube-controllers-7d9458bfc6-g2s8x\" (UID: \"323f6ecf-2b19-407b-9a25-f503af5af82a\") " pod="calico-system/calico-kube-controllers-7d9458bfc6-g2s8x"
Jul 7 00:01:51.412294 kubelet[3164]: I0707 00:01:51.411925 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbjwh\" (UniqueName: \"kubernetes.io/projected/abc995fc-0743-4302-b622-a778795b2bb6-kube-api-access-mbjwh\") pod \"calico-apiserver-68f9b8bfdc-lcq8w\" (UID: \"abc995fc-0743-4302-b622-a778795b2bb6\") " pod="calico-apiserver/calico-apiserver-68f9b8bfdc-lcq8w"
Jul 7 00:01:51.412294 kubelet[3164]: I0707 00:01:51.411949 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/882b3fc4-2da7-43e6-941f-358a9356321e-calico-apiserver-certs\") pod \"calico-apiserver-68f9b8bfdc-qtjjz\" (UID: \"882b3fc4-2da7-43e6-941f-358a9356321e\") " pod="calico-apiserver/calico-apiserver-68f9b8bfdc-qtjjz"
Jul 7 00:01:51.414228 kubelet[3164]: I0707 00:01:51.411981 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztl7m\" (UniqueName: \"kubernetes.io/projected/3424677f-768b-40b5-b2e4-d681424e64e2-kube-api-access-ztl7m\") pod \"goldmane-768f4c5c69-8f2qq\" (UID: \"3424677f-768b-40b5-b2e4-d681424e64e2\") " pod="calico-system/goldmane-768f4c5c69-8f2qq"
Jul 7 00:01:51.414228 kubelet[3164]: I0707 00:01:51.412014 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlg77\" (UniqueName: \"kubernetes.io/projected/6f08f069-babf-4598-9d97-921e74c663ed-kube-api-access-tlg77\") pod \"whisker-55469f4b7-fbf5p\" (UID: \"6f08f069-babf-4598-9d97-921e74c663ed\") " pod="calico-system/whisker-55469f4b7-fbf5p"
Jul 7 00:01:51.414228 kubelet[3164]: I0707 00:01:51.412039 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e01e08e-a6ee-4ae7-9f88-e30bb7b73f15-config-volume\") pod \"coredns-668d6bf9bc-bf6gq\" (UID: \"4e01e08e-a6ee-4ae7-9f88-e30bb7b73f15\") " pod="kube-system/coredns-668d6bf9bc-bf6gq"
Jul 7 00:01:51.414228 kubelet[3164]: I0707 00:01:51.412063 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3424677f-768b-40b5-b2e4-d681424e64e2-config\") pod \"goldmane-768f4c5c69-8f2qq\" (UID: \"3424677f-768b-40b5-b2e4-d681424e64e2\") " pod="calico-system/goldmane-768f4c5c69-8f2qq"
Jul 7 00:01:51.414228 kubelet[3164]: I0707 00:01:51.412091 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3424677f-768b-40b5-b2e4-d681424e64e2-goldmane-key-pair\") pod \"goldmane-768f4c5c69-8f2qq\" (UID: \"3424677f-768b-40b5-b2e4-d681424e64e2\") " pod="calico-system/goldmane-768f4c5c69-8f2qq"
Jul 7 00:01:51.415167 kubelet[3164]: I0707 00:01:51.412124 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trg7g\" (UniqueName: \"kubernetes.io/projected/4e01e08e-a6ee-4ae7-9f88-e30bb7b73f15-kube-api-access-trg7g\") pod \"coredns-668d6bf9bc-bf6gq\" (UID: \"4e01e08e-a6ee-4ae7-9f88-e30bb7b73f15\") " pod="kube-system/coredns-668d6bf9bc-bf6gq"
Jul 7 00:01:51.415167 kubelet[3164]: I0707 00:01:51.412154 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3424677f-768b-40b5-b2e4-d681424e64e2-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-8f2qq\" (UID: \"3424677f-768b-40b5-b2e4-d681424e64e2\") " pod="calico-system/goldmane-768f4c5c69-8f2qq"
Jul 7 00:01:51.415167 kubelet[3164]: I0707 00:01:51.412182 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k855\" (UniqueName: \"kubernetes.io/projected/882b3fc4-2da7-43e6-941f-358a9356321e-kube-api-access-9k855\") pod \"calico-apiserver-68f9b8bfdc-qtjjz\" (UID: \"882b3fc4-2da7-43e6-941f-358a9356321e\") " pod="calico-apiserver/calico-apiserver-68f9b8bfdc-qtjjz"
Jul 7 00:01:51.415167 kubelet[3164]: I0707 00:01:51.412215 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/abc995fc-0743-4302-b622-a778795b2bb6-calico-apiserver-certs\") pod \"calico-apiserver-68f9b8bfdc-lcq8w\" (UID: \"abc995fc-0743-4302-b622-a778795b2bb6\") " pod="calico-apiserver/calico-apiserver-68f9b8bfdc-lcq8w"
Jul 7 00:01:51.415167 kubelet[3164]: I0707 00:01:51.412723 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/323f6ecf-2b19-407b-9a25-f503af5af82a-tigera-ca-bundle\") pod \"calico-kube-controllers-7d9458bfc6-g2s8x\" (UID: \"323f6ecf-2b19-407b-9a25-f503af5af82a\") " pod="calico-system/calico-kube-controllers-7d9458bfc6-g2s8x"
Jul 7 00:01:51.415422 kubelet[3164]: I0707 00:01:51.412772 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06399cd4-39d2-47a4-bc67-182674acacc1-config-volume\") pod \"coredns-668d6bf9bc-mzs6t\" (UID: \"06399cd4-39d2-47a4-bc67-182674acacc1\") " pod="kube-system/coredns-668d6bf9bc-mzs6t"
Jul 7 00:01:51.415422 kubelet[3164]: I0707 00:01:51.412800 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p97c2\" (UniqueName: \"kubernetes.io/projected/06399cd4-39d2-47a4-bc67-182674acacc1-kube-api-access-p97c2\") pod \"coredns-668d6bf9bc-mzs6t\" (UID: \"06399cd4-39d2-47a4-bc67-182674acacc1\") " pod="kube-system/coredns-668d6bf9bc-mzs6t"
00:01:51.424980 systemd[1]: Created slice kubepods-burstable-pod06399cd4_39d2_47a4_bc67_182674acacc1.slice - libcontainer container kubepods-burstable-pod06399cd4_39d2_47a4_bc67_182674acacc1.slice. Jul 7 00:01:51.435754 systemd[1]: Created slice kubepods-besteffort-pod882b3fc4_2da7_43e6_941f_358a9356321e.slice - libcontainer container kubepods-besteffort-pod882b3fc4_2da7_43e6_941f_358a9356321e.slice. Jul 7 00:01:51.450590 systemd[1]: Created slice kubepods-besteffort-pod6f08f069_babf_4598_9d97_921e74c663ed.slice - libcontainer container kubepods-besteffort-pod6f08f069_babf_4598_9d97_921e74c663ed.slice. Jul 7 00:01:51.461633 systemd[1]: Created slice kubepods-besteffort-podabc995fc_0743_4302_b622_a778795b2bb6.slice - libcontainer container kubepods-besteffort-podabc995fc_0743_4302_b622_a778795b2bb6.slice. Jul 7 00:01:51.472427 systemd[1]: Created slice kubepods-besteffort-pod3424677f_768b_40b5_b2e4_d681424e64e2.slice - libcontainer container kubepods-besteffort-pod3424677f_768b_40b5_b2e4_d681424e64e2.slice. Jul 7 00:01:51.640069 systemd[1]: Created slice kubepods-besteffort-pod15afd7c1_8f7a_46ff_bed1_960e12f70585.slice - libcontainer container kubepods-besteffort-pod15afd7c1_8f7a_46ff_bed1_960e12f70585.slice. Jul 7 00:01:51.644210 containerd[1969]: time="2025-07-07T00:01:51.643740699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-prlf8,Uid:15afd7c1-8f7a-46ff-bed1-960e12f70585,Namespace:calico-system,Attempt:0,}" Jul 7 00:01:51.714246 containerd[1969]: time="2025-07-07T00:01:51.714065083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bf6gq,Uid:4e01e08e-a6ee-4ae7-9f88-e30bb7b73f15,Namespace:kube-system,Attempt:0,}" Jul 7 00:01:51.728846 containerd[1969]: time="2025-07-07T00:01:51.728698972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d9458bfc6-g2s8x,Uid:323f6ecf-2b19-407b-9a25-f503af5af82a,Namespace:calico-system,Attempt:0,}" Jul 7 00:01:51.735473 containerd[1969]: time="2025-07-07T00:01:51.734518789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mzs6t,Uid:06399cd4-39d2-47a4-bc67-182674acacc1,Namespace:kube-system,Attempt:0,}" Jul 7 00:01:51.745733 containerd[1969]: time="2025-07-07T00:01:51.745072572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68f9b8bfdc-qtjjz,Uid:882b3fc4-2da7-43e6-941f-358a9356321e,Namespace:calico-apiserver,Attempt:0,}" Jul 7 00:01:51.757630 containerd[1969]: time="2025-07-07T00:01:51.757484031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55469f4b7-fbf5p,Uid:6f08f069-babf-4598-9d97-921e74c663ed,Namespace:calico-system,Attempt:0,}" Jul 7 00:01:51.778354 containerd[1969]: time="2025-07-07T00:01:51.778308681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-8f2qq,Uid:3424677f-768b-40b5-b2e4-d681424e64e2,Namespace:calico-system,Attempt:0,}" Jul 7 00:01:51.778759 containerd[1969]: time="2025-07-07T00:01:51.778721495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68f9b8bfdc-lcq8w,Uid:abc995fc-0743-4302-b622-a778795b2bb6,Namespace:calico-apiserver,Attempt:0,}" Jul 7 00:01:51.786076 containerd[1969]: time="2025-07-07T00:01:51.786028459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 7 00:01:52.538903 containerd[1969]: time="2025-07-07T00:01:52.537914857Z" level=error msg="Failed to destroy network for sandbox \"adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:52.543231 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285-shm.mount: Deactivated successfully. Jul 7 00:01:52.546848 containerd[1969]: time="2025-07-07T00:01:52.541207312Z" level=error msg="encountered an error cleaning up failed sandbox \"adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:52.546848 containerd[1969]: time="2025-07-07T00:01:52.546464921Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68f9b8bfdc-lcq8w,Uid:abc995fc-0743-4302-b622-a778795b2bb6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:52.570217 containerd[1969]: time="2025-07-07T00:01:52.570169089Z" level=error msg="Failed to destroy network for sandbox \"4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:52.574362 containerd[1969]: time="2025-07-07T00:01:52.571763094Z" level=error msg="encountered an error cleaning up failed sandbox \"4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:52.574362 containerd[1969]: time="2025-07-07T00:01:52.571841594Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55469f4b7-fbf5p,Uid:6f08f069-babf-4598-9d97-921e74c663ed,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:52.574362 containerd[1969]: time="2025-07-07T00:01:52.572017062Z" level=error msg="Failed to destroy network for sandbox \"be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:52.574362 containerd[1969]: time="2025-07-07T00:01:52.573470812Z" level=error msg="encountered an error cleaning up failed sandbox \"be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jul 7 00:01:52.574362 containerd[1969]: time="2025-07-07T00:01:52.573533389Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-8f2qq,Uid:3424677f-768b-40b5-b2e4-d681424e64e2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:52.574362 containerd[1969]: time="2025-07-07T00:01:52.573660588Z" level=error msg="Failed to destroy network for sandbox \"57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:52.574362 containerd[1969]: time="2025-07-07T00:01:52.573973804Z" level=error msg="encountered an error cleaning up failed sandbox \"57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:52.574362 containerd[1969]: time="2025-07-07T00:01:52.574026229Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-prlf8,Uid:15afd7c1-8f7a-46ff-bed1-960e12f70585,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:52.574362 containerd[1969]: time="2025-07-07T00:01:52.574135335Z" level=error msg="Failed to destroy network for sandbox \"f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:52.574923 kubelet[3164]: E0707 00:01:52.570451 3164 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:52.575249 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9-shm.mount: Deactivated successfully. 
Jul 7 00:01:52.577327 containerd[1969]: time="2025-07-07T00:01:52.575724399Z" level=error msg="encountered an error cleaning up failed sandbox \"f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:52.577327 containerd[1969]: time="2025-07-07T00:01:52.575785262Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mzs6t,Uid:06399cd4-39d2-47a4-bc67-182674acacc1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:52.577327 containerd[1969]: time="2025-07-07T00:01:52.575911600Z" level=error msg="Failed to destroy network for sandbox \"8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:52.577327 containerd[1969]: time="2025-07-07T00:01:52.576683860Z" level=error msg="Failed to destroy network for sandbox \"a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:52.579440 kubelet[3164]: E0707 00:01:52.575887 3164 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:52.579440 kubelet[3164]: E0707 00:01:52.575994 3164 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:52.579440 kubelet[3164]: E0707 00:01:52.577694 3164 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:52.579614 containerd[1969]: time="2025-07-07T00:01:52.578082392Z" level=error msg="Failed to destroy network for sandbox \"a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:52.580788 kubelet[3164]: E0707 00:01:52.577745 3164 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:52.580900 containerd[1969]: time="2025-07-07T00:01:52.579974059Z" level=error msg="encountered an error cleaning up failed sandbox \"a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:52.580900 containerd[1969]: time="2025-07-07T00:01:52.580061544Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68f9b8bfdc-qtjjz,Uid:882b3fc4-2da7-43e6-941f-358a9356321e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:52.580997 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb-shm.mount: Deactivated successfully. Jul 7 00:01:52.581177 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995-shm.mount: Deactivated successfully. Jul 7 00:01:52.581319 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635-shm.mount: Deactivated successfully. Jul 7 00:01:52.581407 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e-shm.mount: Deactivated successfully. Jul 7 00:01:52.581489 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1-shm.mount: Deactivated successfully. Jul 7 00:01:52.581563 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00-shm.mount: Deactivated successfully. 
Jul 7 00:01:52.588371 containerd[1969]: time="2025-07-07T00:01:52.586447127Z" level=error msg="encountered an error cleaning up failed sandbox \"8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:52.588371 containerd[1969]: time="2025-07-07T00:01:52.586531065Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d9458bfc6-g2s8x,Uid:323f6ecf-2b19-407b-9a25-f503af5af82a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:52.588371 containerd[1969]: time="2025-07-07T00:01:52.587916550Z" level=error msg="encountered an error cleaning up failed sandbox \"a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:52.588371 containerd[1969]: time="2025-07-07T00:01:52.587984694Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bf6gq,Uid:4e01e08e-a6ee-4ae7-9f88-e30bb7b73f15,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:52.588696 kubelet[3164]: E0707 00:01:52.588164 3164 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:52.588696 kubelet[3164]: E0707 00:01:52.588220 3164 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:52.591097 kubelet[3164]: E0707 00:01:52.591055 3164 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:52.601462 kubelet[3164]: E0707 00:01:52.601042 3164 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68f9b8bfdc-lcq8w" Jul 7 00:01:52.601462 kubelet[3164]: E0707 00:01:52.601084 3164 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mzs6t" Jul 7 00:01:52.601462 kubelet[3164]: E0707 00:01:52.601113 3164 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-prlf8" Jul 7 00:01:52.601462 kubelet[3164]: E0707 00:01:52.601126 3164 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-prlf8" Jul 7 00:01:52.601778 kubelet[3164]: E0707 00:01:52.601127 3164 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68f9b8bfdc-qtjjz" Jul 7 00:01:52.601778 kubelet[3164]: E0707 00:01:52.601139 3164 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68f9b8bfdc-qtjjz" Jul 7 00:01:52.601778 kubelet[3164]: E0707 00:01:52.601171 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68f9b8bfdc-qtjjz_calico-apiserver(882b3fc4-2da7-43e6-941f-358a9356321e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68f9b8bfdc-qtjjz_calico-apiserver(882b3fc4-2da7-43e6-941f-358a9356321e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-68f9b8bfdc-qtjjz" podUID="882b3fc4-2da7-43e6-941f-358a9356321e" Jul 7 00:01:52.602169 kubelet[3164]: E0707 00:01:52.601171 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-prlf8_calico-system(15afd7c1-8f7a-46ff-bed1-960e12f70585)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-prlf8_calico-system(15afd7c1-8f7a-46ff-bed1-960e12f70585)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-prlf8" podUID="15afd7c1-8f7a-46ff-bed1-960e12f70585" Jul 7 00:01:52.602169 kubelet[3164]: E0707 00:01:52.601211 3164 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d9458bfc6-g2s8x" Jul 7 00:01:52.602169 kubelet[3164]: E0707 00:01:52.601223 3164 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d9458bfc6-g2s8x" Jul 7 00:01:52.602424 kubelet[3164]: E0707 00:01:52.601259 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7d9458bfc6-g2s8x_calico-system(323f6ecf-2b19-407b-9a25-f503af5af82a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7d9458bfc6-g2s8x_calico-system(323f6ecf-2b19-407b-9a25-f503af5af82a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d9458bfc6-g2s8x" podUID="323f6ecf-2b19-407b-9a25-f503af5af82a" Jul 7 00:01:52.602424 kubelet[3164]: E0707 00:01:52.601114 3164 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mzs6t" Jul 7 00:01:52.602424 kubelet[3164]: E0707 00:01:52.601290 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-mzs6t_kube-system(06399cd4-39d2-47a4-bc67-182674acacc1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-668d6bf9bc-mzs6t_kube-system(06399cd4-39d2-47a4-bc67-182674acacc1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mzs6t" podUID="06399cd4-39d2-47a4-bc67-182674acacc1" Jul 7 00:01:52.602594 kubelet[3164]: E0707 00:01:52.601071 3164 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-55469f4b7-fbf5p" Jul 7 00:01:52.602594 kubelet[3164]: E0707 00:01:52.601314 3164 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-55469f4b7-fbf5p" Jul 7 00:01:52.602594 kubelet[3164]: E0707 00:01:52.601336 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-55469f4b7-fbf5p_calico-system(6f08f069-babf-4598-9d97-921e74c663ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-55469f4b7-fbf5p_calico-system(6f08f069-babf-4598-9d97-921e74c663ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-55469f4b7-fbf5p" podUID="6f08f069-babf-4598-9d97-921e74c663ed" Jul 7 00:01:52.602729 kubelet[3164]: E0707 00:01:52.601095 3164 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68f9b8bfdc-lcq8w" Jul 7 00:01:52.602729 kubelet[3164]: E0707 00:01:52.601365 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68f9b8bfdc-lcq8w_calico-apiserver(abc995fc-0743-4302-b622-a778795b2bb6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68f9b8bfdc-lcq8w_calico-apiserver(abc995fc-0743-4302-b622-a778795b2bb6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-68f9b8bfdc-lcq8w" podUID="abc995fc-0743-4302-b622-a778795b2bb6" Jul 7 00:01:52.602729 kubelet[3164]: E0707 00:01:52.601388 3164 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-bf6gq" Jul 7 00:01:52.602850 kubelet[3164]: E0707 00:01:52.601400 3164 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-bf6gq" Jul 7 00:01:52.602850 kubelet[3164]: E0707 00:01:52.601424 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-bf6gq_kube-system(4e01e08e-a6ee-4ae7-9f88-e30bb7b73f15)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-bf6gq_kube-system(4e01e08e-a6ee-4ae7-9f88-e30bb7b73f15)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-bf6gq" podUID="4e01e08e-a6ee-4ae7-9f88-e30bb7b73f15" Jul 7 00:01:52.602850 kubelet[3164]: E0707 00:01:52.601043 3164 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-8f2qq" Jul 7 00:01:52.602978 kubelet[3164]: E0707 00:01:52.602113 3164 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-8f2qq" Jul 7 00:01:52.602978 kubelet[3164]: E0707 00:01:52.602291 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-8f2qq_calico-system(3424677f-768b-40b5-b2e4-d681424e64e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-8f2qq_calico-system(3424677f-768b-40b5-b2e4-d681424e64e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-8f2qq" podUID="3424677f-768b-40b5-b2e4-d681424e64e2" Jul 7 00:01:52.810165 kubelet[3164]: I0707 00:01:52.801203 3164 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb" Jul 7 00:01:52.819026 kubelet[3164]: I0707 00:01:52.818981 3164 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995" Jul 7 00:01:52.838389 kubelet[3164]: I0707 00:01:52.838008 3164 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635" Jul 7 00:01:52.840113 kubelet[3164]: I0707 00:01:52.839858 3164 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1" Jul 7 00:01:52.841968 kubelet[3164]: I0707 00:01:52.841939 3164 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00" Jul 7 00:01:52.844063 kubelet[3164]: I0707 00:01:52.844003 3164 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e" Jul 7 00:01:52.862265 kubelet[3164]: I0707 00:01:52.861924 3164 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285" Jul 7 00:01:52.929464 containerd[1969]: time="2025-07-07T00:01:52.928610159Z" level=info msg="StopPodSandbox for \"adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285\"" Jul 7 00:01:52.932327 kubelet[3164]: I0707 00:01:52.930036 3164 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9" Jul 7 00:01:52.940133 containerd[1969]: time="2025-07-07T00:01:52.939659090Z" level=info msg="Ensure that sandbox adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285 in task-service has been cleanup successfully" Jul 7 00:01:52.941502 containerd[1969]: time="2025-07-07T00:01:52.929684312Z" level=info msg="StopPodSandbox for \"a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995\"" Jul 7 00:01:52.941781 containerd[1969]: time="2025-07-07T00:01:52.941748259Z" level=info msg="Ensure that sandbox a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995 in task-service has been cleanup successfully" Jul 7 00:01:52.942704 containerd[1969]: time="2025-07-07T00:01:52.929723164Z" level=info msg="StopPodSandbox for \"be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb\"" Jul 7 00:01:52.942907 containerd[1969]: time="2025-07-07T00:01:52.942882421Z" level=info msg="Ensure that sandbox be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb in task-service has been cleanup successfully" Jul 7 00:01:52.943069 containerd[1969]: time="2025-07-07T00:01:52.929755608Z" level=info msg="StopPodSandbox for \"8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635\"" Jul 7 00:01:52.943350 containerd[1969]: time="2025-07-07T00:01:52.943326288Z" level=info msg="Ensure that sandbox 8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635 in task-service has been cleanup successfully" Jul 7 00:01:52.948330 containerd[1969]: time="2025-07-07T00:01:52.929833997Z" 
level=info msg="StopPodSandbox for \"57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00\"" Jul 7 00:01:52.948548 containerd[1969]: time="2025-07-07T00:01:52.948521662Z" level=info msg="Ensure that sandbox 57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00 in task-service has been cleanup successfully" Jul 7 00:01:52.952738 containerd[1969]: time="2025-07-07T00:01:52.929865552Z" level=info msg="StopPodSandbox for \"f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e\"" Jul 7 00:01:52.952738 containerd[1969]: time="2025-07-07T00:01:52.952437995Z" level=info msg="Ensure that sandbox f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e in task-service has been cleanup successfully" Jul 7 00:01:52.955692 containerd[1969]: time="2025-07-07T00:01:52.932224372Z" level=info msg="StopPodSandbox for \"4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9\"" Jul 7 00:01:52.964090 containerd[1969]: time="2025-07-07T00:01:52.929800462Z" level=info msg="StopPodSandbox for \"a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1\"" Jul 7 00:01:52.964367 containerd[1969]: time="2025-07-07T00:01:52.964335883Z" level=info msg="Ensure that sandbox a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1 in task-service has been cleanup successfully" Jul 7 00:01:52.966045 containerd[1969]: time="2025-07-07T00:01:52.965998383Z" level=info msg="Ensure that sandbox 4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9 in task-service has been cleanup successfully" Jul 7 00:01:53.100743 containerd[1969]: time="2025-07-07T00:01:53.100574976Z" level=error msg="StopPodSandbox for \"be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb\" failed" error="failed to destroy network for sandbox \"be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:53.116470 containerd[1969]: time="2025-07-07T00:01:53.116385953Z" level=error msg="StopPodSandbox for \"57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00\" failed" error="failed to destroy network for sandbox \"57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:53.117414 kubelet[3164]: E0707 00:01:53.117271 3164 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00" Jul 7 00:01:53.121756 kubelet[3164]: E0707 00:01:53.101409 3164 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb" Jul 7 00:01:53.140752 kubelet[3164]: E0707 00:01:53.131352 3164 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb"} Jul 7 00:01:53.140752 kubelet[3164]: E0707 00:01:53.140411 3164 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3424677f-768b-40b5-b2e4-d681424e64e2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:01:53.140752 kubelet[3164]: E0707 00:01:53.140456 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3424677f-768b-40b5-b2e4-d681424e64e2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-8f2qq" podUID="3424677f-768b-40b5-b2e4-d681424e64e2" Jul 7 00:01:53.140752 kubelet[3164]: E0707 00:01:53.130360 3164 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00"} Jul 7 00:01:53.141339 kubelet[3164]: E0707 00:01:53.141295 3164 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"15afd7c1-8f7a-46ff-bed1-960e12f70585\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:01:53.141472 kubelet[3164]: E0707 00:01:53.141344 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"15afd7c1-8f7a-46ff-bed1-960e12f70585\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-prlf8" podUID="15afd7c1-8f7a-46ff-bed1-960e12f70585" Jul 7 00:01:53.222630 containerd[1969]: time="2025-07-07T00:01:53.222576839Z" level=error msg="StopPodSandbox for \"8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635\" failed" error="failed to destroy network for sandbox \"8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:53.223483 kubelet[3164]: E0707 00:01:53.223057 3164 log.go:32] "StopPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to destroy network for sandbox \"8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635" Jul 7 00:01:53.223483 kubelet[3164]: E0707 00:01:53.223366 3164 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635"} Jul 7 00:01:53.223483 kubelet[3164]: E0707 00:01:53.223411 3164 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"323f6ecf-2b19-407b-9a25-f503af5af82a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:01:53.223483 kubelet[3164]: E0707 00:01:53.223445 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"323f6ecf-2b19-407b-9a25-f503af5af82a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d9458bfc6-g2s8x" podUID="323f6ecf-2b19-407b-9a25-f503af5af82a" Jul 7 00:01:53.228295 containerd[1969]: time="2025-07-07T00:01:53.228101635Z" level=error msg="StopPodSandbox for \"adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285\" failed" error="failed to destroy network for sandbox \"adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:53.229706 kubelet[3164]: E0707 00:01:53.229450 3164 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285" Jul 7 00:01:53.229706 kubelet[3164]: E0707 00:01:53.229522 3164 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285"} Jul 7 00:01:53.229706 kubelet[3164]: E0707 00:01:53.229574 3164 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"abc995fc-0743-4302-b622-a778795b2bb6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:01:53.229706 kubelet[3164]: E0707 00:01:53.229613 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"abc995fc-0743-4302-b622-a778795b2bb6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68f9b8bfdc-lcq8w" podUID="abc995fc-0743-4302-b622-a778795b2bb6" Jul 7 00:01:53.236298 containerd[1969]: time="2025-07-07T00:01:53.236230766Z" level=error msg="StopPodSandbox for \"a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995\" failed" error="failed to destroy network for sandbox \"a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:53.237179 kubelet[3164]: E0707 00:01:53.237129 3164 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995" Jul 7 00:01:53.238271 kubelet[3164]: E0707 00:01:53.237387 3164 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995"} Jul 7 00:01:53.238271 kubelet[3164]: E0707 00:01:53.237438 3164 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"882b3fc4-2da7-43e6-941f-358a9356321e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:01:53.238271 kubelet[3164]: E0707 00:01:53.237479 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"882b3fc4-2da7-43e6-941f-358a9356321e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68f9b8bfdc-qtjjz" podUID="882b3fc4-2da7-43e6-941f-358a9356321e" Jul 7 00:01:53.250963 containerd[1969]: time="2025-07-07T00:01:53.250908252Z" level=error msg="StopPodSandbox for \"a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1\" failed" error="failed to destroy network for sandbox \"a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1\": plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:53.251439 kubelet[3164]: E0707 00:01:53.251377 3164 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1" Jul 7 00:01:53.251884 containerd[1969]: time="2025-07-07T00:01:53.251662605Z" level=error msg="StopPodSandbox for \"f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e\" failed" error="failed to destroy network for sandbox \"f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:53.251960 kubelet[3164]: E0707 00:01:53.251754 3164 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1"} Jul 7 00:01:53.251960 kubelet[3164]: E0707 00:01:53.251803 3164 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4e01e08e-a6ee-4ae7-9f88-e30bb7b73f15\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:01:53.251960 kubelet[3164]: E0707 00:01:53.251843 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4e01e08e-a6ee-4ae7-9f88-e30bb7b73f15\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-bf6gq" podUID="4e01e08e-a6ee-4ae7-9f88-e30bb7b73f15" Jul 7 00:01:53.254139 kubelet[3164]: E0707 00:01:53.253942 3164 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e" Jul 7 00:01:53.254139 kubelet[3164]: E0707 00:01:53.254003 3164 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e"} Jul 7 00:01:53.254139 kubelet[3164]: E0707 00:01:53.254047 3164 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"06399cd4-39d2-47a4-bc67-182674acacc1\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:01:53.254139 kubelet[3164]: E0707 00:01:53.254082 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"06399cd4-39d2-47a4-bc67-182674acacc1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mzs6t" podUID="06399cd4-39d2-47a4-bc67-182674acacc1" Jul 7 00:01:53.254775 containerd[1969]: time="2025-07-07T00:01:53.254734467Z" level=error msg="StopPodSandbox for \"4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9\" failed" error="failed to destroy network for sandbox \"4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:53.256509 kubelet[3164]: E0707 00:01:53.256322 3164 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9" Jul 7 00:01:53.256509 kubelet[3164]: E0707 00:01:53.256382 3164 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9"} Jul 7 00:01:53.256509 kubelet[3164]: E0707 00:01:53.256424 3164 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6f08f069-babf-4598-9d97-921e74c663ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:01:53.256509 kubelet[3164]: E0707 00:01:53.256453 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6f08f069-babf-4598-9d97-921e74c663ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-55469f4b7-fbf5p" podUID="6f08f069-babf-4598-9d97-921e74c663ed" Jul 7 00:01:58.684436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4249703677.mount: Deactivated 
successfully. Jul 7 00:01:58.769359 containerd[1969]: time="2025-07-07T00:01:58.769282724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 7 00:01:58.784277 containerd[1969]: time="2025-07-07T00:01:58.782929134Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 6.991859461s" Jul 7 00:01:58.784277 containerd[1969]: time="2025-07-07T00:01:58.783019265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 7 00:01:58.791626 containerd[1969]: time="2025-07-07T00:01:58.791446147Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:58.842470 containerd[1969]: time="2025-07-07T00:01:58.842347518Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:58.843468 containerd[1969]: time="2025-07-07T00:01:58.843126346Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:58.849728 containerd[1969]: time="2025-07-07T00:01:58.849675361Z" level=info msg="CreateContainer within sandbox \"862a64f2d594037fefd578c7c781ee56ed94dfe15e70a0ab14c798ca8194beba\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 00:01:58.919475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4070205886.mount: Deactivated successfully. Jul 7 00:01:58.979455 containerd[1969]: time="2025-07-07T00:01:58.979329312Z" level=info msg="CreateContainer within sandbox \"862a64f2d594037fefd578c7c781ee56ed94dfe15e70a0ab14c798ca8194beba\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a326cff3160af8bc1da2f716ea9893cd5ea6a2073a1a4b4de87d055d6bf6dd0e\"" Jul 7 00:01:58.980338 containerd[1969]: time="2025-07-07T00:01:58.980297695Z" level=info msg="StartContainer for \"a326cff3160af8bc1da2f716ea9893cd5ea6a2073a1a4b4de87d055d6bf6dd0e\"" Jul 7 00:01:59.091655 systemd[1]: Started cri-containerd-a326cff3160af8bc1da2f716ea9893cd5ea6a2073a1a4b4de87d055d6bf6dd0e.scope - libcontainer container a326cff3160af8bc1da2f716ea9893cd5ea6a2073a1a4b4de87d055d6bf6dd0e. Jul 7 00:01:59.144309 containerd[1969]: time="2025-07-07T00:01:59.144177062Z" level=info msg="StartContainer for \"a326cff3160af8bc1da2f716ea9893cd5ea6a2073a1a4b4de87d055d6bf6dd0e\" returns successfully" Jul 7 00:01:59.440421 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 7 00:01:59.441725 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jul 7 00:02:00.454298 kubelet[3164]: I0707 00:02:00.433027 3164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-wr85v" podStartSLOduration=2.550293788 podStartE2EDuration="19.414048363s" podCreationTimestamp="2025-07-07 00:01:41 +0000 UTC" firstStartedPulling="2025-07-07 00:01:41.922510333 +0000 UTC m=+21.442306904" lastFinishedPulling="2025-07-07 00:01:58.786264904 +0000 UTC m=+38.306061479" observedRunningTime="2025-07-07 00:02:00.21142288 +0000 UTC m=+39.731219466" watchObservedRunningTime="2025-07-07 00:02:00.414048363 +0000 UTC m=+39.933844947" Jul 7 00:02:00.464287 containerd[1969]: time="2025-07-07T00:02:00.461467531Z" level=info msg="StopPodSandbox for \"4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9\"" Jul 7 00:02:01.156199 kubelet[3164]: I0707 00:02:01.156145 3164 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:02:01.336286 containerd[1969]: 2025-07-07 00:02:00.622 [INFO][4460] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9" Jul 7 00:02:01.336286 containerd[1969]: 2025-07-07 00:02:00.626 [INFO][4460] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9" iface="eth0" netns="/var/run/netns/cni-9811d574-d24b-712d-9895-35bceac1bfcc" Jul 7 00:02:01.336286 containerd[1969]: 2025-07-07 00:02:00.626 [INFO][4460] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9" iface="eth0" netns="/var/run/netns/cni-9811d574-d24b-712d-9895-35bceac1bfcc" Jul 7 00:02:01.336286 containerd[1969]: 2025-07-07 00:02:00.628 [INFO][4460] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9" iface="eth0" netns="/var/run/netns/cni-9811d574-d24b-712d-9895-35bceac1bfcc" Jul 7 00:02:01.336286 containerd[1969]: 2025-07-07 00:02:00.629 [INFO][4460] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9" Jul 7 00:02:01.336286 containerd[1969]: 2025-07-07 00:02:00.629 [INFO][4460] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9" Jul 7 00:02:01.336286 containerd[1969]: 2025-07-07 00:02:01.209 [INFO][4467] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9" HandleID="k8s-pod-network.4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9" Workload="ip--172--31--20--165-k8s-whisker--55469f4b7--fbf5p-eth0" Jul 7 00:02:01.336286 containerd[1969]: 2025-07-07 00:02:01.234 [INFO][4467] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:01.336286 containerd[1969]: 2025-07-07 00:02:01.236 [INFO][4467] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:02:01.336286 containerd[1969]: 2025-07-07 00:02:01.304 [WARNING][4467] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9" HandleID="k8s-pod-network.4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9" Workload="ip--172--31--20--165-k8s-whisker--55469f4b7--fbf5p-eth0" Jul 7 00:02:01.336286 containerd[1969]: 2025-07-07 00:02:01.304 [INFO][4467] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9" HandleID="k8s-pod-network.4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9" Workload="ip--172--31--20--165-k8s-whisker--55469f4b7--fbf5p-eth0" Jul 7 00:02:01.336286 containerd[1969]: 2025-07-07 00:02:01.317 [INFO][4467] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:02:01.336286 containerd[1969]: 2025-07-07 00:02:01.328 [INFO][4460] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9" Jul 7 00:02:01.377727 containerd[1969]: time="2025-07-07T00:02:01.374303711Z" level=info msg="TearDown network for sandbox \"4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9\" successfully" Jul 7 00:02:01.377727 containerd[1969]: time="2025-07-07T00:02:01.374353572Z" level=info msg="StopPodSandbox for \"4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9\" returns successfully" Jul 7 00:02:01.345458 systemd[1]: run-netns-cni\x2d9811d574\x2dd24b\x2d712d\x2d9895\x2d35bceac1bfcc.mount: Deactivated successfully. Jul 7 00:02:01.590257 kubelet[3164]: I0707 00:02:01.590044 3164 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlg77\" (UniqueName: \"kubernetes.io/projected/6f08f069-babf-4598-9d97-921e74c663ed-kube-api-access-tlg77\") pod \"6f08f069-babf-4598-9d97-921e74c663ed\" (UID: \"6f08f069-babf-4598-9d97-921e74c663ed\") " Jul 7 00:02:01.632158 kubelet[3164]: I0707 00:02:01.630043 3164 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f08f069-babf-4598-9d97-921e74c663ed-whisker-ca-bundle\") pod \"6f08f069-babf-4598-9d97-921e74c663ed\" (UID: \"6f08f069-babf-4598-9d97-921e74c663ed\") " Jul 7 00:02:01.632158 kubelet[3164]: I0707 00:02:01.630117 3164 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6f08f069-babf-4598-9d97-921e74c663ed-whisker-backend-key-pair\") pod \"6f08f069-babf-4598-9d97-921e74c663ed\" (UID: \"6f08f069-babf-4598-9d97-921e74c663ed\") " Jul 7 00:02:01.658963 kubelet[3164]: I0707 00:02:01.655351 3164 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f08f069-babf-4598-9d97-921e74c663ed-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6f08f069-babf-4598-9d97-921e74c663ed" (UID: "6f08f069-babf-4598-9d97-921e74c663ed"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 7 00:02:01.708433 kubelet[3164]: I0707 00:02:01.707415 3164 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f08f069-babf-4598-9d97-921e74c663ed-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6f08f069-babf-4598-9d97-921e74c663ed" (UID: "6f08f069-babf-4598-9d97-921e74c663ed"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 7 00:02:01.709456 kubelet[3164]: I0707 00:02:01.709367 3164 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f08f069-babf-4598-9d97-921e74c663ed-kube-api-access-tlg77" (OuterVolumeSpecName: "kube-api-access-tlg77") pod "6f08f069-babf-4598-9d97-921e74c663ed" (UID: "6f08f069-babf-4598-9d97-921e74c663ed"). InnerVolumeSpecName "kube-api-access-tlg77". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 00:02:01.723656 systemd[1]: var-lib-kubelet-pods-6f08f069\x2dbabf\x2d4598\x2d9d97\x2d921e74c663ed-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtlg77.mount: Deactivated successfully. Jul 7 00:02:01.723814 systemd[1]: var-lib-kubelet-pods-6f08f069\x2dbabf\x2d4598\x2d9d97\x2d921e74c663ed-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 7 00:02:01.739112 kubelet[3164]: I0707 00:02:01.739061 3164 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tlg77\" (UniqueName: \"kubernetes.io/projected/6f08f069-babf-4598-9d97-921e74c663ed-kube-api-access-tlg77\") on node \"ip-172-31-20-165\" DevicePath \"\"" Jul 7 00:02:01.739306 kubelet[3164]: I0707 00:02:01.739151 3164 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f08f069-babf-4598-9d97-921e74c663ed-whisker-ca-bundle\") on node \"ip-172-31-20-165\" DevicePath \"\"" Jul 7 00:02:01.739306 kubelet[3164]: I0707 00:02:01.739167 3164 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6f08f069-babf-4598-9d97-921e74c663ed-whisker-backend-key-pair\") on node \"ip-172-31-20-165\" DevicePath \"\"" Jul 7 00:02:02.227559 systemd[1]: Removed slice kubepods-besteffort-pod6f08f069_babf_4598_9d97_921e74c663ed.slice - libcontainer container kubepods-besteffort-pod6f08f069_babf_4598_9d97_921e74c663ed.slice. Jul 7 00:02:02.679341 kubelet[3164]: I0707 00:02:02.679298 3164 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f08f069-babf-4598-9d97-921e74c663ed" path="/var/lib/kubelet/pods/6f08f069-babf-4598-9d97-921e74c663ed/volumes" Jul 7 00:02:02.685625 systemd[1]: Created slice kubepods-besteffort-pod8e5330c2_65c6_43e8_ad2c_8b0670e27dae.slice - libcontainer container kubepods-besteffort-pod8e5330c2_65c6_43e8_ad2c_8b0670e27dae.slice. 
Jul 7 00:02:02.810689 kubelet[3164]: I0707 00:02:02.810639 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr9p9\" (UniqueName: \"kubernetes.io/projected/8e5330c2-65c6-43e8-ad2c-8b0670e27dae-kube-api-access-cr9p9\") pod \"whisker-7b4757d6fc-vvxrz\" (UID: \"8e5330c2-65c6-43e8-ad2c-8b0670e27dae\") " pod="calico-system/whisker-7b4757d6fc-vvxrz" Jul 7 00:02:02.810689 kubelet[3164]: I0707 00:02:02.810709 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e5330c2-65c6-43e8-ad2c-8b0670e27dae-whisker-ca-bundle\") pod \"whisker-7b4757d6fc-vvxrz\" (UID: \"8e5330c2-65c6-43e8-ad2c-8b0670e27dae\") " pod="calico-system/whisker-7b4757d6fc-vvxrz" Jul 7 00:02:02.810689 kubelet[3164]: I0707 00:02:02.810748 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8e5330c2-65c6-43e8-ad2c-8b0670e27dae-whisker-backend-key-pair\") pod \"whisker-7b4757d6fc-vvxrz\" (UID: \"8e5330c2-65c6-43e8-ad2c-8b0670e27dae\") " pod="calico-system/whisker-7b4757d6fc-vvxrz" Jul 7 00:02:02.999985 containerd[1969]: time="2025-07-07T00:02:02.999846196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b4757d6fc-vvxrz,Uid:8e5330c2-65c6-43e8-ad2c-8b0670e27dae,Namespace:calico-system,Attempt:0,}" Jul 7 00:02:03.333388 systemd-networkd[1882]: calidf6e089db00: Link UP Jul 7 00:02:03.335376 systemd-networkd[1882]: calidf6e089db00: Gained carrier Jul 7 00:02:03.337747 (udev-worker)[4609]: Network interface NamePolicy= disabled on kernel command line. Jul 7 00:02:03.360100 containerd[1969]: 2025-07-07 00:02:03.098 [INFO][4588] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:02:03.360100 containerd[1969]: 2025-07-07 00:02:03.111 [INFO][4588] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--165-k8s-whisker--7b4757d6fc--vvxrz-eth0 whisker-7b4757d6fc- calico-system 8e5330c2-65c6-43e8-ad2c-8b0670e27dae 920 0 2025-07-07 00:02:02 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7b4757d6fc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-20-165 whisker-7b4757d6fc-vvxrz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calidf6e089db00 [] [] }} ContainerID="5f60502a452b155edb740d3682be255a45d13b13e14b404950deacc9ea3fbbdc" Namespace="calico-system" Pod="whisker-7b4757d6fc-vvxrz" WorkloadEndpoint="ip--172--31--20--165-k8s-whisker--7b4757d6fc--vvxrz-" Jul 7 00:02:03.360100 containerd[1969]: 2025-07-07 00:02:03.111 [INFO][4588] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5f60502a452b155edb740d3682be255a45d13b13e14b404950deacc9ea3fbbdc" Namespace="calico-system" Pod="whisker-7b4757d6fc-vvxrz" WorkloadEndpoint="ip--172--31--20--165-k8s-whisker--7b4757d6fc--vvxrz-eth0" Jul 7 00:02:03.360100 containerd[1969]: 2025-07-07 00:02:03.185 [INFO][4599] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5f60502a452b155edb740d3682be255a45d13b13e14b404950deacc9ea3fbbdc" HandleID="k8s-pod-network.5f60502a452b155edb740d3682be255a45d13b13e14b404950deacc9ea3fbbdc" Workload="ip--172--31--20--165-k8s-whisker--7b4757d6fc--vvxrz-eth0" Jul 7 00:02:03.360100 containerd[1969]: 
2025-07-07 00:02:03.185 [INFO][4599] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5f60502a452b155edb740d3682be255a45d13b13e14b404950deacc9ea3fbbdc" HandleID="k8s-pod-network.5f60502a452b155edb740d3682be255a45d13b13e14b404950deacc9ea3fbbdc" Workload="ip--172--31--20--165-k8s-whisker--7b4757d6fc--vvxrz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00062e560), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-165", "pod":"whisker-7b4757d6fc-vvxrz", "timestamp":"2025-07-07 00:02:03.185089812 +0000 UTC"}, Hostname:"ip-172-31-20-165", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:02:03.360100 containerd[1969]: 2025-07-07 00:02:03.185 [INFO][4599] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:03.360100 containerd[1969]: 2025-07-07 00:02:03.185 [INFO][4599] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:02:03.360100 containerd[1969]: 2025-07-07 00:02:03.185 [INFO][4599] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-165' Jul 7 00:02:03.360100 containerd[1969]: 2025-07-07 00:02:03.219 [INFO][4599] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5f60502a452b155edb740d3682be255a45d13b13e14b404950deacc9ea3fbbdc" host="ip-172-31-20-165" Jul 7 00:02:03.360100 containerd[1969]: 2025-07-07 00:02:03.241 [INFO][4599] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-165" Jul 7 00:02:03.360100 containerd[1969]: 2025-07-07 00:02:03.251 [INFO][4599] ipam/ipam.go 511: Trying affinity for 192.168.93.128/26 host="ip-172-31-20-165" Jul 7 00:02:03.360100 containerd[1969]: 2025-07-07 00:02:03.255 [INFO][4599] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.128/26 host="ip-172-31-20-165" Jul 7 00:02:03.360100 containerd[1969]: 2025-07-07 00:02:03.266 [INFO][4599] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.128/26 host="ip-172-31-20-165" Jul 7 00:02:03.360100 containerd[1969]: 2025-07-07 00:02:03.266 [INFO][4599] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.93.128/26 handle="k8s-pod-network.5f60502a452b155edb740d3682be255a45d13b13e14b404950deacc9ea3fbbdc" host="ip-172-31-20-165" Jul 7 00:02:03.360100 containerd[1969]: 2025-07-07 00:02:03.272 [INFO][4599] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5f60502a452b155edb740d3682be255a45d13b13e14b404950deacc9ea3fbbdc Jul 7 00:02:03.360100 containerd[1969]: 2025-07-07 00:02:03.284 [INFO][4599] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.93.128/26 handle="k8s-pod-network.5f60502a452b155edb740d3682be255a45d13b13e14b404950deacc9ea3fbbdc" host="ip-172-31-20-165" Jul 7 00:02:03.360100 containerd[1969]: 2025-07-07 00:02:03.296 [INFO][4599] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.93.129/26] block=192.168.93.128/26 handle="k8s-pod-network.5f60502a452b155edb740d3682be255a45d13b13e14b404950deacc9ea3fbbdc" host="ip-172-31-20-165" Jul 7 00:02:03.360100 containerd[1969]: 2025-07-07 00:02:03.296 [INFO][4599] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.129/26] handle="k8s-pod-network.5f60502a452b155edb740d3682be255a45d13b13e14b404950deacc9ea3fbbdc" host="ip-172-31-20-165" Jul 7 00:02:03.360100 containerd[1969]: 2025-07-07 00:02:03.296 [INFO][4599] ipam/ipam_plugin.go 374: Released 
host-wide IPAM lock. Jul 7 00:02:03.360100 containerd[1969]: 2025-07-07 00:02:03.296 [INFO][4599] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.129/26] IPv6=[] ContainerID="5f60502a452b155edb740d3682be255a45d13b13e14b404950deacc9ea3fbbdc" HandleID="k8s-pod-network.5f60502a452b155edb740d3682be255a45d13b13e14b404950deacc9ea3fbbdc" Workload="ip--172--31--20--165-k8s-whisker--7b4757d6fc--vvxrz-eth0" Jul 7 00:02:03.361159 containerd[1969]: 2025-07-07 00:02:03.301 [INFO][4588] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5f60502a452b155edb740d3682be255a45d13b13e14b404950deacc9ea3fbbdc" Namespace="calico-system" Pod="whisker-7b4757d6fc-vvxrz" WorkloadEndpoint="ip--172--31--20--165-k8s-whisker--7b4757d6fc--vvxrz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--165-k8s-whisker--7b4757d6fc--vvxrz-eth0", GenerateName:"whisker-7b4757d6fc-", Namespace:"calico-system", SelfLink:"", UID:"8e5330c2-65c6-43e8-ad2c-8b0670e27dae", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 2, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7b4757d6fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-165", ContainerID:"", Pod:"whisker-7b4757d6fc-vvxrz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.93.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidf6e089db00", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:02:03.361159 containerd[1969]: 2025-07-07 00:02:03.302 [INFO][4588] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.129/32] ContainerID="5f60502a452b155edb740d3682be255a45d13b13e14b404950deacc9ea3fbbdc" Namespace="calico-system" Pod="whisker-7b4757d6fc-vvxrz" WorkloadEndpoint="ip--172--31--20--165-k8s-whisker--7b4757d6fc--vvxrz-eth0" Jul 7 00:02:03.361159 containerd[1969]: 2025-07-07 00:02:03.302 [INFO][4588] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidf6e089db00 ContainerID="5f60502a452b155edb740d3682be255a45d13b13e14b404950deacc9ea3fbbdc" Namespace="calico-system" Pod="whisker-7b4757d6fc-vvxrz" WorkloadEndpoint="ip--172--31--20--165-k8s-whisker--7b4757d6fc--vvxrz-eth0" Jul 7 00:02:03.361159 containerd[1969]: 2025-07-07 00:02:03.336 [INFO][4588] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5f60502a452b155edb740d3682be255a45d13b13e14b404950deacc9ea3fbbdc" Namespace="calico-system" Pod="whisker-7b4757d6fc-vvxrz" WorkloadEndpoint="ip--172--31--20--165-k8s-whisker--7b4757d6fc--vvxrz-eth0" Jul 7 00:02:03.361159 containerd[1969]: 2025-07-07 00:02:03.337 [INFO][4588] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5f60502a452b155edb740d3682be255a45d13b13e14b404950deacc9ea3fbbdc" Namespace="calico-system" Pod="whisker-7b4757d6fc-vvxrz"
WorkloadEndpoint="ip--172--31--20--165-k8s-whisker--7b4757d6fc--vvxrz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--165-k8s-whisker--7b4757d6fc--vvxrz-eth0", GenerateName:"whisker-7b4757d6fc-", Namespace:"calico-system", SelfLink:"", UID:"8e5330c2-65c6-43e8-ad2c-8b0670e27dae", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 2, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7b4757d6fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-165", ContainerID:"5f60502a452b155edb740d3682be255a45d13b13e14b404950deacc9ea3fbbdc", Pod:"whisker-7b4757d6fc-vvxrz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.93.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidf6e089db00", MAC:"f6:cd:aa:64:17:00", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:02:03.361159 containerd[1969]: 2025-07-07 00:02:03.355 [INFO][4588] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5f60502a452b155edb740d3682be255a45d13b13e14b404950deacc9ea3fbbdc" Namespace="calico-system" Pod="whisker-7b4757d6fc-vvxrz" WorkloadEndpoint="ip--172--31--20--165-k8s-whisker--7b4757d6fc--vvxrz-eth0" Jul 7 00:02:03.412145 containerd[1969]: time="2025-07-07T00:02:03.412006036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:02:03.412498 containerd[1969]: time="2025-07-07T00:02:03.412193810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:02:03.412739 containerd[1969]: time="2025-07-07T00:02:03.412699881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:02:03.412873 containerd[1969]: time="2025-07-07T00:02:03.412820418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:02:03.442504 systemd[1]: Started cri-containerd-5f60502a452b155edb740d3682be255a45d13b13e14b404950deacc9ea3fbbdc.scope - libcontainer container 5f60502a452b155edb740d3682be255a45d13b13e14b404950deacc9ea3fbbdc. 
Jul 7 00:02:03.508767 containerd[1969]: time="2025-07-07T00:02:03.508732817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b4757d6fc-vvxrz,Uid:8e5330c2-65c6-43e8-ad2c-8b0670e27dae,Namespace:calico-system,Attempt:0,} returns sandbox id \"5f60502a452b155edb740d3682be255a45d13b13e14b404950deacc9ea3fbbdc\"" Jul 7 00:02:03.511089 containerd[1969]: time="2025-07-07T00:02:03.511044476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 7 00:02:04.630785 systemd-networkd[1882]: calidf6e089db00: Gained IPv6LL Jul 7 00:02:04.642915 containerd[1969]: time="2025-07-07T00:02:04.640470807Z" level=info msg="StopPodSandbox for \"adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285\"" Jul 7 00:02:04.797517 containerd[1969]: 2025-07-07 00:02:04.718 [INFO][4685] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285" Jul 7 00:02:04.797517 containerd[1969]: 2025-07-07 00:02:04.719 [INFO][4685] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285" iface="eth0" netns="/var/run/netns/cni-c5dfb666-4ae1-f6cf-1d9a-489dae71a9d3" Jul 7 00:02:04.797517 containerd[1969]: 2025-07-07 00:02:04.722 [INFO][4685] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285" iface="eth0" netns="/var/run/netns/cni-c5dfb666-4ae1-f6cf-1d9a-489dae71a9d3" Jul 7 00:02:04.797517 containerd[1969]: 2025-07-07 00:02:04.726 [INFO][4685] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285" iface="eth0" netns="/var/run/netns/cni-c5dfb666-4ae1-f6cf-1d9a-489dae71a9d3" Jul 7 00:02:04.797517 containerd[1969]: 2025-07-07 00:02:04.726 [INFO][4685] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285" Jul 7 00:02:04.797517 containerd[1969]: 2025-07-07 00:02:04.726 [INFO][4685] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285" Jul 7 00:02:04.797517 containerd[1969]: 2025-07-07 00:02:04.770 [INFO][4696] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285" HandleID="k8s-pod-network.adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285" Workload="ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--lcq8w-eth0" Jul 7 00:02:04.797517 containerd[1969]: 2025-07-07 00:02:04.771 [INFO][4696] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:04.797517 containerd[1969]: 2025-07-07 00:02:04.771 [INFO][4696] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:02:04.797517 containerd[1969]: 2025-07-07 00:02:04.779 [WARNING][4696] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285" HandleID="k8s-pod-network.adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285" Workload="ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--lcq8w-eth0" Jul 7 00:02:04.797517 containerd[1969]: 2025-07-07 00:02:04.779 [INFO][4696] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285" HandleID="k8s-pod-network.adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285" Workload="ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--lcq8w-eth0" Jul 7 00:02:04.797517 containerd[1969]: 2025-07-07 00:02:04.783 [INFO][4696] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:02:04.797517 containerd[1969]: 2025-07-07 00:02:04.791 [INFO][4685] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285" Jul 7 00:02:04.798842 containerd[1969]: time="2025-07-07T00:02:04.798655833Z" level=info msg="TearDown network for sandbox \"adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285\" successfully" Jul 7 00:02:04.798842 containerd[1969]: time="2025-07-07T00:02:04.798699625Z" level=info msg="StopPodSandbox for \"adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285\" returns successfully" Jul 7 00:02:04.802181 containerd[1969]: time="2025-07-07T00:02:04.802082770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68f9b8bfdc-lcq8w,Uid:abc995fc-0743-4302-b622-a778795b2bb6,Namespace:calico-apiserver,Attempt:1,}" Jul 7 00:02:04.804482 systemd[1]: run-netns-cni\x2dc5dfb666\x2d4ae1\x2df6cf\x2d1d9a\x2d489dae71a9d3.mount: Deactivated successfully. 
Jul 7 00:02:04.935128 containerd[1969]: time="2025-07-07T00:02:04.934068135Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:04.937137 containerd[1969]: time="2025-07-07T00:02:04.937093113Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 7 00:02:04.940589 containerd[1969]: time="2025-07-07T00:02:04.940540215Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:04.947070 containerd[1969]: time="2025-07-07T00:02:04.947023029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:04.962888 containerd[1969]: time="2025-07-07T00:02:04.962446217Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.451357415s" Jul 7 00:02:04.962888 containerd[1969]: time="2025-07-07T00:02:04.962514698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 7 00:02:04.981876 containerd[1969]: time="2025-07-07T00:02:04.981793010Z" level=info msg="CreateContainer within sandbox \"5f60502a452b155edb740d3682be255a45d13b13e14b404950deacc9ea3fbbdc\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 7 00:02:05.017027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3663271758.mount: Deactivated successfully. Jul 7 00:02:05.018395 containerd[1969]: time="2025-07-07T00:02:05.018318987Z" level=info msg="CreateContainer within sandbox \"5f60502a452b155edb740d3682be255a45d13b13e14b404950deacc9ea3fbbdc\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"ba3c2668affe3fd9865e4aff02425d6af2fc217dddaae8fb749c882c1b4e9a6b\"" Jul 7 00:02:05.019399 containerd[1969]: time="2025-07-07T00:02:05.019362789Z" level=info msg="StartContainer for \"ba3c2668affe3fd9865e4aff02425d6af2fc217dddaae8fb749c882c1b4e9a6b\"" Jul 7 00:02:05.081304 systemd[1]: run-containerd-runc-k8s.io-ba3c2668affe3fd9865e4aff02425d6af2fc217dddaae8fb749c882c1b4e9a6b-runc.pRvaZB.mount: Deactivated successfully. Jul 7 00:02:05.095635 systemd[1]: Started cri-containerd-ba3c2668affe3fd9865e4aff02425d6af2fc217dddaae8fb749c882c1b4e9a6b.scope - libcontainer container ba3c2668affe3fd9865e4aff02425d6af2fc217dddaae8fb749c882c1b4e9a6b. 
Jul 7 00:02:05.101290 systemd-networkd[1882]: cali8318b63015f: Link UP Jul 7 00:02:05.103414 systemd-networkd[1882]: cali8318b63015f: Gained carrier Jul 7 00:02:05.135021 containerd[1969]: 2025-07-07 00:02:04.902 [INFO][4704] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:02:05.135021 containerd[1969]: 2025-07-07 00:02:04.920 [INFO][4704] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--lcq8w-eth0 calico-apiserver-68f9b8bfdc- calico-apiserver abc995fc-0743-4302-b622-a778795b2bb6 930 0 2025-07-07 00:01:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68f9b8bfdc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-20-165 calico-apiserver-68f9b8bfdc-lcq8w eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8318b63015f [] [] }} ContainerID="d99cb9c39c4065e4dcac487be2cb7aa0182e66e71fde9ee37787e9220836f264" Namespace="calico-apiserver" Pod="calico-apiserver-68f9b8bfdc-lcq8w" WorkloadEndpoint="ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--lcq8w-" Jul 7 00:02:05.135021 containerd[1969]: 2025-07-07 00:02:04.920 [INFO][4704] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d99cb9c39c4065e4dcac487be2cb7aa0182e66e71fde9ee37787e9220836f264" Namespace="calico-apiserver" Pod="calico-apiserver-68f9b8bfdc-lcq8w" WorkloadEndpoint="ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--lcq8w-eth0" Jul 7 00:02:05.135021 containerd[1969]: 2025-07-07 00:02:04.969 [INFO][4716] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d99cb9c39c4065e4dcac487be2cb7aa0182e66e71fde9ee37787e9220836f264" HandleID="k8s-pod-network.d99cb9c39c4065e4dcac487be2cb7aa0182e66e71fde9ee37787e9220836f264" Workload="ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--lcq8w-eth0" Jul 7 00:02:05.135021 containerd[1969]: 2025-07-07 00:02:04.971 [INFO][4716] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d99cb9c39c4065e4dcac487be2cb7aa0182e66e71fde9ee37787e9220836f264" HandleID="k8s-pod-network.d99cb9c39c4065e4dcac487be2cb7aa0182e66e71fde9ee37787e9220836f264" Workload="ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--lcq8w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024eff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-20-165", "pod":"calico-apiserver-68f9b8bfdc-lcq8w", "timestamp":"2025-07-07 00:02:04.968735735 +0000 UTC"}, Hostname:"ip-172-31-20-165", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:02:05.135021 containerd[1969]: 2025-07-07 00:02:04.971 [INFO][4716] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:05.135021 containerd[1969]: 2025-07-07 00:02:04.971 [INFO][4716] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:02:05.135021 containerd[1969]: 2025-07-07 00:02:04.971 [INFO][4716] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-165' Jul 7 00:02:05.135021 containerd[1969]: 2025-07-07 00:02:04.984 [INFO][4716] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d99cb9c39c4065e4dcac487be2cb7aa0182e66e71fde9ee37787e9220836f264" host="ip-172-31-20-165" Jul 7 00:02:05.135021 containerd[1969]: 2025-07-07 00:02:04.991 [INFO][4716] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-165" Jul 7 00:02:05.135021 containerd[1969]: 2025-07-07 00:02:05.000 [INFO][4716] ipam/ipam.go 511: Trying affinity for 192.168.93.128/26 host="ip-172-31-20-165" Jul 7 00:02:05.135021 containerd[1969]: 2025-07-07 00:02:05.014 [INFO][4716] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.128/26 host="ip-172-31-20-165" Jul 7 00:02:05.135021 containerd[1969]: 2025-07-07 00:02:05.027 [INFO][4716] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.128/26 host="ip-172-31-20-165" Jul 7 00:02:05.135021 containerd[1969]: 2025-07-07 00:02:05.028 [INFO][4716] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.93.128/26 handle="k8s-pod-network.d99cb9c39c4065e4dcac487be2cb7aa0182e66e71fde9ee37787e9220836f264" host="ip-172-31-20-165" Jul 7 00:02:05.135021 containerd[1969]: 2025-07-07 00:02:05.032 [INFO][4716] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d99cb9c39c4065e4dcac487be2cb7aa0182e66e71fde9ee37787e9220836f264 Jul 7 00:02:05.135021 containerd[1969]: 2025-07-07 00:02:05.048 [INFO][4716] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.93.128/26 handle="k8s-pod-network.d99cb9c39c4065e4dcac487be2cb7aa0182e66e71fde9ee37787e9220836f264" host="ip-172-31-20-165" Jul 7 00:02:05.135021 containerd[1969]: 2025-07-07 00:02:05.068 [INFO][4716] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.93.130/26] block=192.168.93.128/26 handle="k8s-pod-network.d99cb9c39c4065e4dcac487be2cb7aa0182e66e71fde9ee37787e9220836f264" host="ip-172-31-20-165" Jul 7 00:02:05.135021 containerd[1969]: 2025-07-07 00:02:05.068 [INFO][4716] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.130/26] handle="k8s-pod-network.d99cb9c39c4065e4dcac487be2cb7aa0182e66e71fde9ee37787e9220836f264" host="ip-172-31-20-165" Jul 7 00:02:05.135021 containerd[1969]: 2025-07-07 00:02:05.068 [INFO][4716] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 00:02:05.135021 containerd[1969]: 2025-07-07 00:02:05.068 [INFO][4716] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.130/26] IPv6=[] ContainerID="d99cb9c39c4065e4dcac487be2cb7aa0182e66e71fde9ee37787e9220836f264" HandleID="k8s-pod-network.d99cb9c39c4065e4dcac487be2cb7aa0182e66e71fde9ee37787e9220836f264" Workload="ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--lcq8w-eth0" Jul 7 00:02:05.136090 containerd[1969]: 2025-07-07 00:02:05.087 [INFO][4704] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d99cb9c39c4065e4dcac487be2cb7aa0182e66e71fde9ee37787e9220836f264" Namespace="calico-apiserver" Pod="calico-apiserver-68f9b8bfdc-lcq8w" WorkloadEndpoint="ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--lcq8w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--lcq8w-eth0", GenerateName:"calico-apiserver-68f9b8bfdc-", Namespace:"calico-apiserver", SelfLink:"", UID:"abc995fc-0743-4302-b622-a778795b2bb6", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 36, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68f9b8bfdc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-165", ContainerID:"", Pod:"calico-apiserver-68f9b8bfdc-lcq8w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8318b63015f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:02:05.136090 containerd[1969]: 2025-07-07 00:02:05.089 [INFO][4704] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.130/32] ContainerID="d99cb9c39c4065e4dcac487be2cb7aa0182e66e71fde9ee37787e9220836f264" Namespace="calico-apiserver" Pod="calico-apiserver-68f9b8bfdc-lcq8w" WorkloadEndpoint="ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--lcq8w-eth0" Jul 7 00:02:05.136090 containerd[1969]: 2025-07-07 00:02:05.090 [INFO][4704] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8318b63015f ContainerID="d99cb9c39c4065e4dcac487be2cb7aa0182e66e71fde9ee37787e9220836f264" Namespace="calico-apiserver" Pod="calico-apiserver-68f9b8bfdc-lcq8w" WorkloadEndpoint="ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--lcq8w-eth0" Jul 7 00:02:05.136090 containerd[1969]: 2025-07-07 00:02:05.102 [INFO][4704] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d99cb9c39c4065e4dcac487be2cb7aa0182e66e71fde9ee37787e9220836f264" Namespace="calico-apiserver" Pod="calico-apiserver-68f9b8bfdc-lcq8w" WorkloadEndpoint="ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--lcq8w-eth0" Jul 7 00:02:05.136090 containerd[1969]: 2025-07-07 00:02:05.103 [INFO][4704] cni-plugin/k8s.go 446: Added Mac, interface name, and
active container ID to endpoint ContainerID="d99cb9c39c4065e4dcac487be2cb7aa0182e66e71fde9ee37787e9220836f264" Namespace="calico-apiserver" Pod="calico-apiserver-68f9b8bfdc-lcq8w" WorkloadEndpoint="ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--lcq8w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--lcq8w-eth0", GenerateName:"calico-apiserver-68f9b8bfdc-", Namespace:"calico-apiserver", SelfLink:"", UID:"abc995fc-0743-4302-b622-a778795b2bb6", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 36, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68f9b8bfdc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-165", ContainerID:"d99cb9c39c4065e4dcac487be2cb7aa0182e66e71fde9ee37787e9220836f264", Pod:"calico-apiserver-68f9b8bfdc-lcq8w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8318b63015f", MAC:"22:3f:0e:e9:2f:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:02:05.136090 containerd[1969]: 2025-07-07 00:02:05.123 [INFO][4704] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d99cb9c39c4065e4dcac487be2cb7aa0182e66e71fde9ee37787e9220836f264" Namespace="calico-apiserver" Pod="calico-apiserver-68f9b8bfdc-lcq8w" WorkloadEndpoint="ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--lcq8w-eth0" Jul 7 00:02:05.198746 containerd[1969]: time="2025-07-07T00:02:05.198526125Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:02:05.198872 containerd[1969]: time="2025-07-07T00:02:05.198686582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:02:05.200856 containerd[1969]: time="2025-07-07T00:02:05.198873029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:02:05.202074 containerd[1969]: time="2025-07-07T00:02:05.201976345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:02:05.253934 systemd[1]: Started cri-containerd-d99cb9c39c4065e4dcac487be2cb7aa0182e66e71fde9ee37787e9220836f264.scope - libcontainer container d99cb9c39c4065e4dcac487be2cb7aa0182e66e71fde9ee37787e9220836f264.
Jul 7 00:02:05.279654 containerd[1969]: time="2025-07-07T00:02:05.279600969Z" level=info msg="StartContainer for \"ba3c2668affe3fd9865e4aff02425d6af2fc217dddaae8fb749c882c1b4e9a6b\" returns successfully" Jul 7 00:02:05.281897 containerd[1969]: time="2025-07-07T00:02:05.281853424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 7 00:02:05.364624 containerd[1969]: time="2025-07-07T00:02:05.364580175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68f9b8bfdc-lcq8w,Uid:abc995fc-0743-4302-b622-a778795b2bb6,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d99cb9c39c4065e4dcac487be2cb7aa0182e66e71fde9ee37787e9220836f264\"" Jul 7 00:02:05.634746 containerd[1969]: time="2025-07-07T00:02:05.633093918Z" level=info msg="StopPodSandbox for \"57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00\"" Jul 7 00:02:05.751155 containerd[1969]: 2025-07-07 00:02:05.705 [INFO][4838] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00" Jul 7 00:02:05.751155 containerd[1969]: 2025-07-07 00:02:05.705 [INFO][4838] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00" iface="eth0" netns="/var/run/netns/cni-ab93a2b5-924a-9b6a-8569-cb3b5d740f7a" Jul 7 00:02:05.751155 containerd[1969]: 2025-07-07 00:02:05.706 [INFO][4838] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00" iface="eth0" netns="/var/run/netns/cni-ab93a2b5-924a-9b6a-8569-cb3b5d740f7a" Jul 7 00:02:05.751155 containerd[1969]: 2025-07-07 00:02:05.706 [INFO][4838] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00" iface="eth0" netns="/var/run/netns/cni-ab93a2b5-924a-9b6a-8569-cb3b5d740f7a" Jul 7 00:02:05.751155 containerd[1969]: 2025-07-07 00:02:05.706 [INFO][4838] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00" Jul 7 00:02:05.751155 containerd[1969]: 2025-07-07 00:02:05.706 [INFO][4838] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00" Jul 7 00:02:05.751155 containerd[1969]: 2025-07-07 00:02:05.732 [INFO][4845] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00" HandleID="k8s-pod-network.57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00" Workload="ip--172--31--20--165-k8s-csi--node--driver--prlf8-eth0" Jul 7 00:02:05.751155 containerd[1969]: 2025-07-07 00:02:05.732 [INFO][4845] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:05.751155 containerd[1969]: 2025-07-07 00:02:05.732 [INFO][4845] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:02:05.751155 containerd[1969]: 2025-07-07 00:02:05.739 [WARNING][4845] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00" HandleID="k8s-pod-network.57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00" Workload="ip--172--31--20--165-k8s-csi--node--driver--prlf8-eth0" Jul 7 00:02:05.751155 containerd[1969]: 2025-07-07 00:02:05.740 [INFO][4845] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00" HandleID="k8s-pod-network.57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00" Workload="ip--172--31--20--165-k8s-csi--node--driver--prlf8-eth0" Jul 7 00:02:05.751155 containerd[1969]: 2025-07-07 00:02:05.744 [INFO][4845] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:02:05.751155 containerd[1969]: 2025-07-07 00:02:05.748 [INFO][4838] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00" Jul 7 00:02:05.752427 containerd[1969]: time="2025-07-07T00:02:05.751397955Z" level=info msg="TearDown network for sandbox \"57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00\" successfully" Jul 7 00:02:05.752427 containerd[1969]: time="2025-07-07T00:02:05.751430640Z" level=info msg="StopPodSandbox for \"57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00\" returns successfully" Jul 7 00:02:05.753112 containerd[1969]: time="2025-07-07T00:02:05.752567059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-prlf8,Uid:15afd7c1-8f7a-46ff-bed1-960e12f70585,Namespace:calico-system,Attempt:1,}" Jul 7 00:02:05.903415 systemd-networkd[1882]: cali4642b6fa2c4: Link UP Jul 7 00:02:05.903710 systemd-networkd[1882]: cali4642b6fa2c4: Gained carrier Jul 7 00:02:05.928417 containerd[1969]: 2025-07-07 00:02:05.799 [INFO][4853] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:02:05.928417 containerd[1969]: 2025-07-07 00:02:05.811 [INFO][4853] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--165-k8s-csi--node--driver--prlf8-eth0 csi-node-driver- calico-system 15afd7c1-8f7a-46ff-bed1-960e12f70585 942 0 2025-07-07 00:01:41 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-20-165 csi-node-driver-prlf8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4642b6fa2c4 [] [] }} ContainerID="021dd6616844c4deedfc5267b4bd319130b9734b97b8179950c1e3bbd8c0c81b" Namespace="calico-system" Pod="csi-node-driver-prlf8" WorkloadEndpoint="ip--172--31--20--165-k8s-csi--node--driver--prlf8-" Jul 7 00:02:05.928417 containerd[1969]: 2025-07-07 00:02:05.811 [INFO][4853] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="021dd6616844c4deedfc5267b4bd319130b9734b97b8179950c1e3bbd8c0c81b" Namespace="calico-system" Pod="csi-node-driver-prlf8" WorkloadEndpoint="ip--172--31--20--165-k8s-csi--node--driver--prlf8-eth0" Jul 7 00:02:05.928417 containerd[1969]: 2025-07-07 00:02:05.845 [INFO][4865] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="021dd6616844c4deedfc5267b4bd319130b9734b97b8179950c1e3bbd8c0c81b" HandleID="k8s-pod-network.021dd6616844c4deedfc5267b4bd319130b9734b97b8179950c1e3bbd8c0c81b" 
Workload="ip--172--31--20--165-k8s-csi--node--driver--prlf8-eth0" Jul 7 00:02:05.928417 containerd[1969]: 2025-07-07 00:02:05.846 [INFO][4865] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="021dd6616844c4deedfc5267b4bd319130b9734b97b8179950c1e3bbd8c0c81b" HandleID="k8s-pod-network.021dd6616844c4deedfc5267b4bd319130b9734b97b8179950c1e3bbd8c0c81b" Workload="ip--172--31--20--165-k8s-csi--node--driver--prlf8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024eff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-165", "pod":"csi-node-driver-prlf8", "timestamp":"2025-07-07 00:02:05.84574815 +0000 UTC"}, Hostname:"ip-172-31-20-165", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:02:05.928417 containerd[1969]: 2025-07-07 00:02:05.846 [INFO][4865] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:05.928417 containerd[1969]: 2025-07-07 00:02:05.846 [INFO][4865] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:02:05.928417 containerd[1969]: 2025-07-07 00:02:05.846 [INFO][4865] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-165' Jul 7 00:02:05.928417 containerd[1969]: 2025-07-07 00:02:05.853 [INFO][4865] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.021dd6616844c4deedfc5267b4bd319130b9734b97b8179950c1e3bbd8c0c81b" host="ip-172-31-20-165" Jul 7 00:02:05.928417 containerd[1969]: 2025-07-07 00:02:05.859 [INFO][4865] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-165" Jul 7 00:02:05.928417 containerd[1969]: 2025-07-07 00:02:05.867 [INFO][4865] ipam/ipam.go 511: Trying affinity for 192.168.93.128/26 host="ip-172-31-20-165" Jul 7 00:02:05.928417 containerd[1969]: 2025-07-07 00:02:05.869 [INFO][4865] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.128/26 host="ip-172-31-20-165" Jul 7 00:02:05.928417 containerd[1969]: 2025-07-07 00:02:05.872 [INFO][4865] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.128/26 host="ip-172-31-20-165" Jul 7 00:02:05.928417 containerd[1969]: 2025-07-07 00:02:05.873 [INFO][4865] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.93.128/26 handle="k8s-pod-network.021dd6616844c4deedfc5267b4bd319130b9734b97b8179950c1e3bbd8c0c81b" host="ip-172-31-20-165" Jul 7 00:02:05.928417 containerd[1969]: 2025-07-07 00:02:05.876 [INFO][4865] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.021dd6616844c4deedfc5267b4bd319130b9734b97b8179950c1e3bbd8c0c81b Jul 7 00:02:05.928417 containerd[1969]: 2025-07-07 00:02:05.883 [INFO][4865] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.93.128/26 handle="k8s-pod-network.021dd6616844c4deedfc5267b4bd319130b9734b97b8179950c1e3bbd8c0c81b" host="ip-172-31-20-165" Jul 7 00:02:05.928417 containerd[1969]: 2025-07-07 00:02:05.893 [INFO][4865] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.93.131/26] block=192.168.93.128/26 handle="k8s-pod-network.021dd6616844c4deedfc5267b4bd319130b9734b97b8179950c1e3bbd8c0c81b" host="ip-172-31-20-165" Jul 7 00:02:05.928417 containerd[1969]: 2025-07-07 00:02:05.894 [INFO][4865] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.131/26] handle="k8s-pod-network.021dd6616844c4deedfc5267b4bd319130b9734b97b8179950c1e3bbd8c0c81b" host="ip-172-31-20-165" Jul 7 
00:02:05.928417 containerd[1969]: 2025-07-07 00:02:05.894 [INFO][4865] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:02:05.928417 containerd[1969]: 2025-07-07 00:02:05.894 [INFO][4865] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.131/26] IPv6=[] ContainerID="021dd6616844c4deedfc5267b4bd319130b9734b97b8179950c1e3bbd8c0c81b" HandleID="k8s-pod-network.021dd6616844c4deedfc5267b4bd319130b9734b97b8179950c1e3bbd8c0c81b" Workload="ip--172--31--20--165-k8s-csi--node--driver--prlf8-eth0" Jul 7 00:02:05.929773 containerd[1969]: 2025-07-07 00:02:05.897 [INFO][4853] cni-plugin/k8s.go 418: Populated endpoint ContainerID="021dd6616844c4deedfc5267b4bd319130b9734b97b8179950c1e3bbd8c0c81b" Namespace="calico-system" Pod="csi-node-driver-prlf8" WorkloadEndpoint="ip--172--31--20--165-k8s-csi--node--driver--prlf8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--165-k8s-csi--node--driver--prlf8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"15afd7c1-8f7a-46ff-bed1-960e12f70585", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-165", ContainerID:"", Pod:"csi-node-driver-prlf8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.93.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4642b6fa2c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:02:05.929773 containerd[1969]: 2025-07-07 00:02:05.898 [INFO][4853] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.131/32] ContainerID="021dd6616844c4deedfc5267b4bd319130b9734b97b8179950c1e3bbd8c0c81b" Namespace="calico-system" Pod="csi-node-driver-prlf8" WorkloadEndpoint="ip--172--31--20--165-k8s-csi--node--driver--prlf8-eth0" Jul 7 00:02:05.929773 containerd[1969]: 2025-07-07 00:02:05.898 [INFO][4853] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4642b6fa2c4 ContainerID="021dd6616844c4deedfc5267b4bd319130b9734b97b8179950c1e3bbd8c0c81b" Namespace="calico-system" Pod="csi-node-driver-prlf8" WorkloadEndpoint="ip--172--31--20--165-k8s-csi--node--driver--prlf8-eth0" Jul 7 00:02:05.929773 containerd[1969]: 2025-07-07 00:02:05.901 [INFO][4853] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="021dd6616844c4deedfc5267b4bd319130b9734b97b8179950c1e3bbd8c0c81b" Namespace="calico-system" Pod="csi-node-driver-prlf8" WorkloadEndpoint="ip--172--31--20--165-k8s-csi--node--driver--prlf8-eth0" Jul 7 00:02:05.929773 containerd[1969]: 2025-07-07 00:02:05.901 [INFO][4853] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="021dd6616844c4deedfc5267b4bd319130b9734b97b8179950c1e3bbd8c0c81b" Namespace="calico-system" Pod="csi-node-driver-prlf8" WorkloadEndpoint="ip--172--31--20--165-k8s-csi--node--driver--prlf8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--165-k8s-csi--node--driver--prlf8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"15afd7c1-8f7a-46ff-bed1-960e12f70585", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-165", ContainerID:"021dd6616844c4deedfc5267b4bd319130b9734b97b8179950c1e3bbd8c0c81b", Pod:"csi-node-driver-prlf8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.93.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4642b6fa2c4", MAC:"96:81:eb:a1:d3:88", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:02:05.929773 containerd[1969]: 2025-07-07 00:02:05.924 [INFO][4853] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="021dd6616844c4deedfc5267b4bd319130b9734b97b8179950c1e3bbd8c0c81b" Namespace="calico-system" Pod="csi-node-driver-prlf8" WorkloadEndpoint="ip--172--31--20--165-k8s-csi--node--driver--prlf8-eth0" Jul 7 00:02:05.960993 containerd[1969]: time="2025-07-07T00:02:05.960394815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:02:05.960993 containerd[1969]: time="2025-07-07T00:02:05.960842066Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:02:05.961612 containerd[1969]: time="2025-07-07T00:02:05.961403768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:02:05.962148 containerd[1969]: time="2025-07-07T00:02:05.962053080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:02:05.992500 systemd[1]: Started cri-containerd-021dd6616844c4deedfc5267b4bd319130b9734b97b8179950c1e3bbd8c0c81b.scope - libcontainer container 021dd6616844c4deedfc5267b4bd319130b9734b97b8179950c1e3bbd8c0c81b. Jul 7 00:02:06.011692 systemd[1]: run-netns-cni\x2dab93a2b5\x2d924a\x2d9b6a\x2d8569\x2dcb3b5d740f7a.mount: Deactivated successfully. 
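Note on the assignment sequence above: ipam.go first looks up the host's block affinities, confirms affinity to 192.168.93.128/26, loads that block, claims the lowest free address (here 192.168.93.131), and only then writes the block back to the datastore to make the claim durable. A minimal in-memory sketch of that loop in Go follows -- the type, field, and handle names are illustrative, and the real allocator (libcalico-go's ipam package) also persists handles and retries on datastore conflicts:

package main

import (
	"fmt"
	"net/netip"
)

// block is a toy stand-in for a Calico IPAM block: a /26 plus a record of
// which addresses are claimed and by which handle.
type block struct {
	cidr      netip.Prefix          // e.g. 192.168.93.128/26
	allocated map[netip.Addr]string // address -> handle
}

// assign hands out the lowest free address in the block, mirroring the
// "Attempting to assign 1 addresses from block" and "Writing block in order
// to claim IPs" steps in the log.
func (b *block) assign(handle string) (netip.Addr, bool) {
	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if _, taken := b.allocated[a]; !taken {
			b.allocated[a] = handle // in Calico this is the durable block write
			return a, true
		}
	}
	return netip.Addr{}, false // block exhausted; the real code tries another block
}

func main() {
	b := &block{
		cidr:      netip.MustParsePrefix("192.168.93.128/26"),
		allocated: map[netip.Addr]string{},
	}
	// Pretend .128-.130 were claimed by earlier pods on this node.
	for _, s := range []string{"192.168.93.128", "192.168.93.129", "192.168.93.130"} {
		b.allocated[netip.MustParseAddr(s)] = "pre-existing"
	}
	addr, _ := b.assign("k8s-pod-network.021dd661...")
	fmt.Println("claimed", addr) // 192.168.93.131, matching the log
}

The claimed address is recorded as a /32 in the endpoint's IPNetworks, which is why the block is a /26 but the endpoint dump above shows 192.168.93.131/32.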
Jul 7 00:02:06.044306 containerd[1969]: time="2025-07-07T00:02:06.044183850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-prlf8,Uid:15afd7c1-8f7a-46ff-bed1-960e12f70585,Namespace:calico-system,Attempt:1,} returns sandbox id \"021dd6616844c4deedfc5267b4bd319130b9734b97b8179950c1e3bbd8c0c81b\"" Jul 7 00:02:06.486662 systemd-networkd[1882]: cali8318b63015f: Gained IPv6LL Jul 7 00:02:06.639911 containerd[1969]: time="2025-07-07T00:02:06.636879175Z" level=info msg="StopPodSandbox for \"8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635\"" Jul 7 00:02:06.648024 containerd[1969]: time="2025-07-07T00:02:06.647976923Z" level=info msg="StopPodSandbox for \"a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995\"" Jul 7 00:02:06.663299 containerd[1969]: time="2025-07-07T00:02:06.662122851Z" level=info msg="StopPodSandbox for \"a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1\"" Jul 7 00:02:06.981525 systemd[1]: Started sshd@7-172.31.20.165:22-147.75.109.163:34652.service - OpenSSH per-connection server daemon (147.75.109.163:34652). Jul 7 00:02:07.280703 containerd[1969]: 2025-07-07 00:02:07.045 [INFO][4971] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1" Jul 7 00:02:07.280703 containerd[1969]: 2025-07-07 00:02:07.049 [INFO][4971] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1" iface="eth0" netns="/var/run/netns/cni-9159ed8e-95ec-a80d-4dae-41b5b0f0eeda" Jul 7 00:02:07.280703 containerd[1969]: 2025-07-07 00:02:07.055 [INFO][4971] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1" iface="eth0" netns="/var/run/netns/cni-9159ed8e-95ec-a80d-4dae-41b5b0f0eeda" Jul 7 00:02:07.280703 containerd[1969]: 2025-07-07 00:02:07.059 [INFO][4971] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1" iface="eth0" netns="/var/run/netns/cni-9159ed8e-95ec-a80d-4dae-41b5b0f0eeda" Jul 7 00:02:07.280703 containerd[1969]: 2025-07-07 00:02:07.059 [INFO][4971] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1" Jul 7 00:02:07.280703 containerd[1969]: 2025-07-07 00:02:07.059 [INFO][4971] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1" Jul 7 00:02:07.280703 containerd[1969]: 2025-07-07 00:02:07.240 [INFO][4993] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1" HandleID="k8s-pod-network.a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1" Workload="ip--172--31--20--165-k8s-coredns--668d6bf9bc--bf6gq-eth0" Jul 7 00:02:07.280703 containerd[1969]: 2025-07-07 00:02:07.243 [INFO][4993] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:07.280703 containerd[1969]: 2025-07-07 00:02:07.243 [INFO][4993] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:02:07.280703 containerd[1969]: 2025-07-07 00:02:07.252 [WARNING][4993] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1" HandleID="k8s-pod-network.a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1" Workload="ip--172--31--20--165-k8s-coredns--668d6bf9bc--bf6gq-eth0" Jul 7 00:02:07.280703 containerd[1969]: 2025-07-07 00:02:07.253 [INFO][4993] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1" HandleID="k8s-pod-network.a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1" Workload="ip--172--31--20--165-k8s-coredns--668d6bf9bc--bf6gq-eth0" Jul 7 00:02:07.280703 containerd[1969]: 2025-07-07 00:02:07.261 [INFO][4993] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:02:07.280703 containerd[1969]: 2025-07-07 00:02:07.267 [INFO][4971] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1" Jul 7 00:02:07.287787 containerd[1969]: time="2025-07-07T00:02:07.280777498Z" level=info msg="TearDown network for sandbox \"a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1\" successfully" Jul 7 00:02:07.287787 containerd[1969]: time="2025-07-07T00:02:07.280810957Z" level=info msg="StopPodSandbox for \"a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1\" returns successfully" Jul 7 00:02:07.287787 containerd[1969]: time="2025-07-07T00:02:07.282510900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bf6gq,Uid:4e01e08e-a6ee-4ae7-9f88-e30bb7b73f15,Namespace:kube-system,Attempt:1,}" Jul 7 00:02:07.287945 sshd[4983]: Accepted publickey for core from 147.75.109.163 port 34652 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:02:07.290337 systemd[1]: run-netns-cni\x2d9159ed8e\x2d95ec\x2da80d\x2d4dae\x2d41b5b0f0eeda.mount: Deactivated successfully. Jul 7 00:02:07.291560 sshd[4983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:02:07.308133 systemd-logind[1958]: New session 8 of user core. Jul 7 00:02:07.311468 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 7 00:02:07.375933 containerd[1969]: 2025-07-07 00:02:07.028 [INFO][4955] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635" Jul 7 00:02:07.375933 containerd[1969]: 2025-07-07 00:02:07.029 [INFO][4955] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635" iface="eth0" netns="/var/run/netns/cni-59b9f9fc-3407-6c57-d945-5efe2d35abbf" Jul 7 00:02:07.375933 containerd[1969]: 2025-07-07 00:02:07.029 [INFO][4955] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635" iface="eth0" netns="/var/run/netns/cni-59b9f9fc-3407-6c57-d945-5efe2d35abbf" Jul 7 00:02:07.375933 containerd[1969]: 2025-07-07 00:02:07.033 [INFO][4955] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635" iface="eth0" netns="/var/run/netns/cni-59b9f9fc-3407-6c57-d945-5efe2d35abbf" Jul 7 00:02:07.375933 containerd[1969]: 2025-07-07 00:02:07.033 [INFO][4955] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635" Jul 7 00:02:07.375933 containerd[1969]: 2025-07-07 00:02:07.033 [INFO][4955] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635" Jul 7 00:02:07.375933 containerd[1969]: 2025-07-07 00:02:07.243 [INFO][4986] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635" HandleID="k8s-pod-network.8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635" Workload="ip--172--31--20--165-k8s-calico--kube--controllers--7d9458bfc6--g2s8x-eth0" Jul 7 00:02:07.375933 containerd[1969]: 2025-07-07 00:02:07.247 [INFO][4986] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:07.375933 containerd[1969]: 2025-07-07 00:02:07.261 [INFO][4986] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:02:07.375933 containerd[1969]: 2025-07-07 00:02:07.290 [WARNING][4986] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635" HandleID="k8s-pod-network.8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635" Workload="ip--172--31--20--165-k8s-calico--kube--controllers--7d9458bfc6--g2s8x-eth0" Jul 7 00:02:07.375933 containerd[1969]: 2025-07-07 00:02:07.291 [INFO][4986] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635" HandleID="k8s-pod-network.8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635" Workload="ip--172--31--20--165-k8s-calico--kube--controllers--7d9458bfc6--g2s8x-eth0" Jul 7 00:02:07.375933 containerd[1969]: 2025-07-07 00:02:07.318 [INFO][4986] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:02:07.375933 containerd[1969]: 2025-07-07 00:02:07.345 [INFO][4955] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635" Jul 7 00:02:07.379884 containerd[1969]: time="2025-07-07T00:02:07.377779368Z" level=info msg="TearDown network for sandbox \"8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635\" successfully" Jul 7 00:02:07.379884 containerd[1969]: time="2025-07-07T00:02:07.377829370Z" level=info msg="StopPodSandbox for \"8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635\" returns successfully" Jul 7 00:02:07.384311 containerd[1969]: time="2025-07-07T00:02:07.382985882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d9458bfc6-g2s8x,Uid:323f6ecf-2b19-407b-9a25-f503af5af82a,Namespace:calico-system,Attempt:1,}" Jul 7 00:02:07.383361 systemd-networkd[1882]: cali4642b6fa2c4: Gained IPv6LL Jul 7 00:02:07.384942 systemd[1]: run-netns-cni\x2d59b9f9fc\x2d3407\x2d6c57\x2dd945\x2d5efe2d35abbf.mount: Deactivated successfully. 
Jul 7 00:02:07.393664 containerd[1969]: 2025-07-07 00:02:07.052 [INFO][4967] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995" Jul 7 00:02:07.393664 containerd[1969]: 2025-07-07 00:02:07.053 [INFO][4967] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995" iface="eth0" netns="/var/run/netns/cni-7f820ef1-f99a-b07d-5aa6-b53172df6684" Jul 7 00:02:07.393664 containerd[1969]: 2025-07-07 00:02:07.057 [INFO][4967] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995" iface="eth0" netns="/var/run/netns/cni-7f820ef1-f99a-b07d-5aa6-b53172df6684" Jul 7 00:02:07.393664 containerd[1969]: 2025-07-07 00:02:07.058 [INFO][4967] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995" iface="eth0" netns="/var/run/netns/cni-7f820ef1-f99a-b07d-5aa6-b53172df6684" Jul 7 00:02:07.393664 containerd[1969]: 2025-07-07 00:02:07.058 [INFO][4967] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995" Jul 7 00:02:07.393664 containerd[1969]: 2025-07-07 00:02:07.058 [INFO][4967] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995" Jul 7 00:02:07.393664 containerd[1969]: 2025-07-07 00:02:07.253 [INFO][4991] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995" HandleID="k8s-pod-network.a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995" Workload="ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--qtjjz-eth0" Jul 7 00:02:07.393664 containerd[1969]: 2025-07-07 00:02:07.253 [INFO][4991] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:07.393664 containerd[1969]: 2025-07-07 00:02:07.314 [INFO][4991] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:02:07.393664 containerd[1969]: 2025-07-07 00:02:07.364 [WARNING][4991] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995" HandleID="k8s-pod-network.a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995" Workload="ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--qtjjz-eth0" Jul 7 00:02:07.393664 containerd[1969]: 2025-07-07 00:02:07.364 [INFO][4991] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995" HandleID="k8s-pod-network.a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995" Workload="ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--qtjjz-eth0" Jul 7 00:02:07.393664 containerd[1969]: 2025-07-07 00:02:07.368 [INFO][4991] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:02:07.393664 containerd[1969]: 2025-07-07 00:02:07.373 [INFO][4967] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995" Jul 7 00:02:07.395711 containerd[1969]: time="2025-07-07T00:02:07.395335108Z" level=info msg="TearDown network for sandbox \"a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995\" successfully" Jul 7 00:02:07.395711 containerd[1969]: time="2025-07-07T00:02:07.395393153Z" level=info msg="StopPodSandbox for \"a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995\" returns successfully" Jul 7 00:02:07.400007 containerd[1969]: time="2025-07-07T00:02:07.399954829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68f9b8bfdc-qtjjz,Uid:882b3fc4-2da7-43e6-941f-358a9356321e,Namespace:calico-apiserver,Attempt:1,}" Jul 7 00:02:07.631761 containerd[1969]: time="2025-07-07T00:02:07.630505654Z" level=info msg="StopPodSandbox for \"f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e\"" Jul 7 00:02:07.632966 containerd[1969]: time="2025-07-07T00:02:07.632593684Z" level=info msg="StopPodSandbox for \"be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb\"" Jul 7 00:02:07.992144 systemd-networkd[1882]: calie1f59c638b5: Link UP Jul 7 00:02:07.996091 systemd-networkd[1882]: calie1f59c638b5: Gained carrier Jul 7 00:02:08.127037 containerd[1969]: 2025-07-07 00:02:07.438 [INFO][5017] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:02:08.127037 containerd[1969]: 2025-07-07 00:02:07.470 [INFO][5017] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--165-k8s-coredns--668d6bf9bc--bf6gq-eth0 coredns-668d6bf9bc- kube-system 4e01e08e-a6ee-4ae7-9f88-e30bb7b73f15 987 0 2025-07-07 00:01:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-20-165 coredns-668d6bf9bc-bf6gq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie1f59c638b5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6c03fc74ed4ce2b8540b24727b064d97861a4ca013362b495e2acd8f99847253" Namespace="kube-system" Pod="coredns-668d6bf9bc-bf6gq" WorkloadEndpoint="ip--172--31--20--165-k8s-coredns--668d6bf9bc--bf6gq-" Jul 7 00:02:08.127037 containerd[1969]: 2025-07-07 00:02:07.470 [INFO][5017] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6c03fc74ed4ce2b8540b24727b064d97861a4ca013362b495e2acd8f99847253" Namespace="kube-system" Pod="coredns-668d6bf9bc-bf6gq" WorkloadEndpoint="ip--172--31--20--165-k8s-coredns--668d6bf9bc--bf6gq-eth0" Jul 7 00:02:08.127037 containerd[1969]: 2025-07-07 00:02:07.692 [INFO][5044] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6c03fc74ed4ce2b8540b24727b064d97861a4ca013362b495e2acd8f99847253" HandleID="k8s-pod-network.6c03fc74ed4ce2b8540b24727b064d97861a4ca013362b495e2acd8f99847253" Workload="ip--172--31--20--165-k8s-coredns--668d6bf9bc--bf6gq-eth0" Jul 7 00:02:08.127037 containerd[1969]: 2025-07-07 00:02:07.711 [INFO][5044] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6c03fc74ed4ce2b8540b24727b064d97861a4ca013362b495e2acd8f99847253" HandleID="k8s-pod-network.6c03fc74ed4ce2b8540b24727b064d97861a4ca013362b495e2acd8f99847253" Workload="ip--172--31--20--165-k8s-coredns--668d6bf9bc--bf6gq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4e60), Attrs:map[string]string{"namespace":"kube-system", 
"node":"ip-172-31-20-165", "pod":"coredns-668d6bf9bc-bf6gq", "timestamp":"2025-07-07 00:02:07.692140557 +0000 UTC"}, Hostname:"ip-172-31-20-165", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:02:08.127037 containerd[1969]: 2025-07-07 00:02:07.711 [INFO][5044] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:08.127037 containerd[1969]: 2025-07-07 00:02:07.711 [INFO][5044] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:02:08.127037 containerd[1969]: 2025-07-07 00:02:07.712 [INFO][5044] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-165' Jul 7 00:02:08.127037 containerd[1969]: 2025-07-07 00:02:07.741 [INFO][5044] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6c03fc74ed4ce2b8540b24727b064d97861a4ca013362b495e2acd8f99847253" host="ip-172-31-20-165" Jul 7 00:02:08.127037 containerd[1969]: 2025-07-07 00:02:07.773 [INFO][5044] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-165" Jul 7 00:02:08.127037 containerd[1969]: 2025-07-07 00:02:07.851 [INFO][5044] ipam/ipam.go 511: Trying affinity for 192.168.93.128/26 host="ip-172-31-20-165" Jul 7 00:02:08.127037 containerd[1969]: 2025-07-07 00:02:07.868 [INFO][5044] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.128/26 host="ip-172-31-20-165" Jul 7 00:02:08.127037 containerd[1969]: 2025-07-07 00:02:07.879 [INFO][5044] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.128/26 host="ip-172-31-20-165" Jul 7 00:02:08.127037 containerd[1969]: 2025-07-07 00:02:07.881 [INFO][5044] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.93.128/26 handle="k8s-pod-network.6c03fc74ed4ce2b8540b24727b064d97861a4ca013362b495e2acd8f99847253" host="ip-172-31-20-165" Jul 7 00:02:08.127037 containerd[1969]: 2025-07-07 00:02:07.904 [INFO][5044] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6c03fc74ed4ce2b8540b24727b064d97861a4ca013362b495e2acd8f99847253 Jul 7 00:02:08.127037 containerd[1969]: 2025-07-07 00:02:07.920 [INFO][5044] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.93.128/26 handle="k8s-pod-network.6c03fc74ed4ce2b8540b24727b064d97861a4ca013362b495e2acd8f99847253" host="ip-172-31-20-165" Jul 7 00:02:08.127037 containerd[1969]: 2025-07-07 00:02:07.946 [INFO][5044] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.93.132/26] block=192.168.93.128/26 handle="k8s-pod-network.6c03fc74ed4ce2b8540b24727b064d97861a4ca013362b495e2acd8f99847253" host="ip-172-31-20-165" Jul 7 00:02:08.127037 containerd[1969]: 2025-07-07 00:02:07.949 [INFO][5044] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.132/26] handle="k8s-pod-network.6c03fc74ed4ce2b8540b24727b064d97861a4ca013362b495e2acd8f99847253" host="ip-172-31-20-165" Jul 7 00:02:08.127037 containerd[1969]: 2025-07-07 00:02:07.950 [INFO][5044] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 00:02:08.127037 containerd[1969]: 2025-07-07 00:02:07.950 [INFO][5044] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.132/26] IPv6=[] ContainerID="6c03fc74ed4ce2b8540b24727b064d97861a4ca013362b495e2acd8f99847253" HandleID="k8s-pod-network.6c03fc74ed4ce2b8540b24727b064d97861a4ca013362b495e2acd8f99847253" Workload="ip--172--31--20--165-k8s-coredns--668d6bf9bc--bf6gq-eth0" Jul 7 00:02:08.128202 containerd[1969]: 2025-07-07 00:02:07.964 [INFO][5017] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6c03fc74ed4ce2b8540b24727b064d97861a4ca013362b495e2acd8f99847253" Namespace="kube-system" Pod="coredns-668d6bf9bc-bf6gq" WorkloadEndpoint="ip--172--31--20--165-k8s-coredns--668d6bf9bc--bf6gq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--165-k8s-coredns--668d6bf9bc--bf6gq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4e01e08e-a6ee-4ae7-9f88-e30bb7b73f15", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-165", ContainerID:"", Pod:"coredns-668d6bf9bc-bf6gq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie1f59c638b5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:02:08.128202 containerd[1969]: 2025-07-07 00:02:07.975 [INFO][5017] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.132/32] ContainerID="6c03fc74ed4ce2b8540b24727b064d97861a4ca013362b495e2acd8f99847253" Namespace="kube-system" Pod="coredns-668d6bf9bc-bf6gq" WorkloadEndpoint="ip--172--31--20--165-k8s-coredns--668d6bf9bc--bf6gq-eth0" Jul 7 00:02:08.128202 containerd[1969]: 2025-07-07 00:02:07.977 [INFO][5017] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie1f59c638b5 ContainerID="6c03fc74ed4ce2b8540b24727b064d97861a4ca013362b495e2acd8f99847253" Namespace="kube-system" Pod="coredns-668d6bf9bc-bf6gq" WorkloadEndpoint="ip--172--31--20--165-k8s-coredns--668d6bf9bc--bf6gq-eth0" Jul 7 00:02:08.128202 containerd[1969]: 2025-07-07 00:02:07.996 [INFO][5017] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6c03fc74ed4ce2b8540b24727b064d97861a4ca013362b495e2acd8f99847253" Namespace="kube-system" Pod="coredns-668d6bf9bc-bf6gq" 
WorkloadEndpoint="ip--172--31--20--165-k8s-coredns--668d6bf9bc--bf6gq-eth0" Jul 7 00:02:08.128202 containerd[1969]: 2025-07-07 00:02:08.009 [INFO][5017] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6c03fc74ed4ce2b8540b24727b064d97861a4ca013362b495e2acd8f99847253" Namespace="kube-system" Pod="coredns-668d6bf9bc-bf6gq" WorkloadEndpoint="ip--172--31--20--165-k8s-coredns--668d6bf9bc--bf6gq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--165-k8s-coredns--668d6bf9bc--bf6gq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4e01e08e-a6ee-4ae7-9f88-e30bb7b73f15", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-165", ContainerID:"6c03fc74ed4ce2b8540b24727b064d97861a4ca013362b495e2acd8f99847253", Pod:"coredns-668d6bf9bc-bf6gq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie1f59c638b5", MAC:"da:d4:c4:86:07:3a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:02:08.128202 containerd[1969]: 2025-07-07 00:02:08.083 [INFO][5017] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6c03fc74ed4ce2b8540b24727b064d97861a4ca013362b495e2acd8f99847253" Namespace="kube-system" Pod="coredns-668d6bf9bc-bf6gq" WorkloadEndpoint="ip--172--31--20--165-k8s-coredns--668d6bf9bc--bf6gq-eth0" Jul 7 00:02:08.169270 systemd[1]: run-netns-cni\x2d7f820ef1\x2df99a\x2db07d\x2d5aa6\x2db53172df6684.mount: Deactivated successfully. 
Jul 7 00:02:08.362936 systemd-networkd[1882]: cali3f0ef2e10e0: Link UP Jul 7 00:02:08.382799 systemd-networkd[1882]: cali3f0ef2e10e0: Gained carrier Jul 7 00:02:08.443192 containerd[1969]: 2025-07-07 00:02:07.607 [INFO][5031] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:02:08.443192 containerd[1969]: 2025-07-07 00:02:07.707 [INFO][5031] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--165-k8s-calico--kube--controllers--7d9458bfc6--g2s8x-eth0 calico-kube-controllers-7d9458bfc6- calico-system 323f6ecf-2b19-407b-9a25-f503af5af82a 985 0 2025-07-07 00:01:41 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7d9458bfc6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-20-165 calico-kube-controllers-7d9458bfc6-g2s8x eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3f0ef2e10e0 [] [] }} ContainerID="a0ac35253b01dda17bc4b30308be18fec35d03f24bea2f42cbe67daa8e6986b5" Namespace="calico-system" Pod="calico-kube-controllers-7d9458bfc6-g2s8x" WorkloadEndpoint="ip--172--31--20--165-k8s-calico--kube--controllers--7d9458bfc6--g2s8x-" Jul 7 00:02:08.443192 containerd[1969]: 2025-07-07 00:02:07.707 [INFO][5031] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a0ac35253b01dda17bc4b30308be18fec35d03f24bea2f42cbe67daa8e6986b5" Namespace="calico-system" Pod="calico-kube-controllers-7d9458bfc6-g2s8x" WorkloadEndpoint="ip--172--31--20--165-k8s-calico--kube--controllers--7d9458bfc6--g2s8x-eth0" Jul 7 00:02:08.443192 containerd[1969]: 2025-07-07 00:02:07.955 [INFO][5094] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a0ac35253b01dda17bc4b30308be18fec35d03f24bea2f42cbe67daa8e6986b5" HandleID="k8s-pod-network.a0ac35253b01dda17bc4b30308be18fec35d03f24bea2f42cbe67daa8e6986b5" Workload="ip--172--31--20--165-k8s-calico--kube--controllers--7d9458bfc6--g2s8x-eth0" Jul 7 00:02:08.443192 containerd[1969]: 2025-07-07 00:02:07.958 [INFO][5094] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a0ac35253b01dda17bc4b30308be18fec35d03f24bea2f42cbe67daa8e6986b5" HandleID="k8s-pod-network.a0ac35253b01dda17bc4b30308be18fec35d03f24bea2f42cbe67daa8e6986b5" Workload="ip--172--31--20--165-k8s-calico--kube--controllers--7d9458bfc6--g2s8x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003320d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-165", "pod":"calico-kube-controllers-7d9458bfc6-g2s8x", "timestamp":"2025-07-07 00:02:07.955933731 +0000 UTC"}, Hostname:"ip-172-31-20-165", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:02:08.443192 containerd[1969]: 2025-07-07 00:02:07.964 [INFO][5094] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:08.443192 containerd[1969]: 2025-07-07 00:02:07.964 [INFO][5094] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
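"File /var/lib/calico/mtu does not exist" recurs at the start of every sandbox setup in this trace. That file is where calico-node records its auto-detected MTU for the CNI plugin to pick up; when it is absent the plugin falls back to its configured or default MTU and carries on, so the message is informational. The general read-with-fallback pattern, sketched below (the function name and the 1500 default are illustrative, not Calico's actual choices):

package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// mtuFromFile returns the MTU recorded at path, or def when the file is
// missing or unparsable -- the same degrade-gracefully behavior behind the
// "File /var/lib/calico/mtu does not exist" log line.
func mtuFromFile(path string, def int) int {
	b, err := os.ReadFile(path)
	if err != nil {
		return def // includes "does not exist"
	}
	n, err := strconv.Atoi(strings.TrimSpace(string(b)))
	if err != nil {
		return def
	}
	return n
}

func main() {
	fmt.Println("mtu:", mtuFromFile("/var/lib/calico/mtu", 1500))
}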
Jul 7 00:02:08.443192 containerd[1969]: 2025-07-07 00:02:07.964 [INFO][5094] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-165' Jul 7 00:02:08.443192 containerd[1969]: 2025-07-07 00:02:08.021 [INFO][5094] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a0ac35253b01dda17bc4b30308be18fec35d03f24bea2f42cbe67daa8e6986b5" host="ip-172-31-20-165" Jul 7 00:02:08.443192 containerd[1969]: 2025-07-07 00:02:08.121 [INFO][5094] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-165" Jul 7 00:02:08.443192 containerd[1969]: 2025-07-07 00:02:08.168 [INFO][5094] ipam/ipam.go 511: Trying affinity for 192.168.93.128/26 host="ip-172-31-20-165" Jul 7 00:02:08.443192 containerd[1969]: 2025-07-07 00:02:08.184 [INFO][5094] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.128/26 host="ip-172-31-20-165" Jul 7 00:02:08.443192 containerd[1969]: 2025-07-07 00:02:08.204 [INFO][5094] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.128/26 host="ip-172-31-20-165" Jul 7 00:02:08.443192 containerd[1969]: 2025-07-07 00:02:08.205 [INFO][5094] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.93.128/26 handle="k8s-pod-network.a0ac35253b01dda17bc4b30308be18fec35d03f24bea2f42cbe67daa8e6986b5" host="ip-172-31-20-165" Jul 7 00:02:08.443192 containerd[1969]: 2025-07-07 00:02:08.217 [INFO][5094] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a0ac35253b01dda17bc4b30308be18fec35d03f24bea2f42cbe67daa8e6986b5 Jul 7 00:02:08.443192 containerd[1969]: 2025-07-07 00:02:08.231 [INFO][5094] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.93.128/26 handle="k8s-pod-network.a0ac35253b01dda17bc4b30308be18fec35d03f24bea2f42cbe67daa8e6986b5" host="ip-172-31-20-165" Jul 7 00:02:08.443192 containerd[1969]: 2025-07-07 00:02:08.267 [INFO][5094] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.93.133/26] block=192.168.93.128/26 handle="k8s-pod-network.a0ac35253b01dda17bc4b30308be18fec35d03f24bea2f42cbe67daa8e6986b5" host="ip-172-31-20-165" Jul 7 00:02:08.443192 containerd[1969]: 2025-07-07 00:02:08.269 [INFO][5094] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.133/26] handle="k8s-pod-network.a0ac35253b01dda17bc4b30308be18fec35d03f24bea2f42cbe67daa8e6986b5" host="ip-172-31-20-165" Jul 7 00:02:08.443192 containerd[1969]: 2025-07-07 00:02:08.269 [INFO][5094] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
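At this point the node has handed out 192.168.93.131 (csi-node-driver), .132 (coredns-bf6gq), and now .133 (calico-kube-controllers), with .134 following for calico-apiserver just below -- all from the single affine block 192.168.93.128/26, 64 addresses spanning .128 through .191. That is why every allocation in this burst can reuse the already-confirmed affinity instead of claiming a new block. The geometry, checked with the standard library:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The block this node holds an affinity for, per the log.
	block := netip.MustParsePrefix("192.168.93.128/26")
	size := 1 << (32 - block.Bits()) // 2^(32-26) = 64 addresses

	last := block.Addr()
	for i := 0; i < size-1; i++ {
		last = last.Next()
	}
	fmt.Printf("%s: %d addresses, %s through %s\n", block, size, block.Addr(), last)

	// The addresses assigned in this trace all land inside it.
	for _, s := range []string{"192.168.93.131", "192.168.93.132", "192.168.93.133", "192.168.93.134"} {
		fmt.Println(s, "in block:", block.Contains(netip.MustParseAddr(s)))
	}
}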
Jul 7 00:02:08.443192 containerd[1969]: 2025-07-07 00:02:08.269 [INFO][5094] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.133/26] IPv6=[] ContainerID="a0ac35253b01dda17bc4b30308be18fec35d03f24bea2f42cbe67daa8e6986b5" HandleID="k8s-pod-network.a0ac35253b01dda17bc4b30308be18fec35d03f24bea2f42cbe67daa8e6986b5" Workload="ip--172--31--20--165-k8s-calico--kube--controllers--7d9458bfc6--g2s8x-eth0" Jul 7 00:02:08.448623 containerd[1969]: 2025-07-07 00:02:08.298 [INFO][5031] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a0ac35253b01dda17bc4b30308be18fec35d03f24bea2f42cbe67daa8e6986b5" Namespace="calico-system" Pod="calico-kube-controllers-7d9458bfc6-g2s8x" WorkloadEndpoint="ip--172--31--20--165-k8s-calico--kube--controllers--7d9458bfc6--g2s8x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--165-k8s-calico--kube--controllers--7d9458bfc6--g2s8x-eth0", GenerateName:"calico-kube-controllers-7d9458bfc6-", Namespace:"calico-system", SelfLink:"", UID:"323f6ecf-2b19-407b-9a25-f503af5af82a", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d9458bfc6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-165", ContainerID:"", Pod:"calico-kube-controllers-7d9458bfc6-g2s8x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.93.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3f0ef2e10e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:02:08.448623 containerd[1969]: 2025-07-07 00:02:08.304 [INFO][5031] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.133/32] ContainerID="a0ac35253b01dda17bc4b30308be18fec35d03f24bea2f42cbe67daa8e6986b5" Namespace="calico-system" Pod="calico-kube-controllers-7d9458bfc6-g2s8x" WorkloadEndpoint="ip--172--31--20--165-k8s-calico--kube--controllers--7d9458bfc6--g2s8x-eth0" Jul 7 00:02:08.448623 containerd[1969]: 2025-07-07 00:02:08.304 [INFO][5031] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3f0ef2e10e0 ContainerID="a0ac35253b01dda17bc4b30308be18fec35d03f24bea2f42cbe67daa8e6986b5" Namespace="calico-system" Pod="calico-kube-controllers-7d9458bfc6-g2s8x" WorkloadEndpoint="ip--172--31--20--165-k8s-calico--kube--controllers--7d9458bfc6--g2s8x-eth0" Jul 7 00:02:08.448623 containerd[1969]: 2025-07-07 00:02:08.402 [INFO][5031] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a0ac35253b01dda17bc4b30308be18fec35d03f24bea2f42cbe67daa8e6986b5" Namespace="calico-system" Pod="calico-kube-controllers-7d9458bfc6-g2s8x" WorkloadEndpoint="ip--172--31--20--165-k8s-calico--kube--controllers--7d9458bfc6--g2s8x-eth0" Jul 7 00:02:08.448623 containerd[1969]: 2025-07-07 
00:02:08.406 [INFO][5031] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a0ac35253b01dda17bc4b30308be18fec35d03f24bea2f42cbe67daa8e6986b5" Namespace="calico-system" Pod="calico-kube-controllers-7d9458bfc6-g2s8x" WorkloadEndpoint="ip--172--31--20--165-k8s-calico--kube--controllers--7d9458bfc6--g2s8x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--165-k8s-calico--kube--controllers--7d9458bfc6--g2s8x-eth0", GenerateName:"calico-kube-controllers-7d9458bfc6-", Namespace:"calico-system", SelfLink:"", UID:"323f6ecf-2b19-407b-9a25-f503af5af82a", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d9458bfc6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-165", ContainerID:"a0ac35253b01dda17bc4b30308be18fec35d03f24bea2f42cbe67daa8e6986b5", Pod:"calico-kube-controllers-7d9458bfc6-g2s8x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.93.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3f0ef2e10e0", MAC:"26:d3:f1:de:58:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:02:08.448623 containerd[1969]: 2025-07-07 00:02:08.428 [INFO][5031] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a0ac35253b01dda17bc4b30308be18fec35d03f24bea2f42cbe67daa8e6986b5" Namespace="calico-system" Pod="calico-kube-controllers-7d9458bfc6-g2s8x" WorkloadEndpoint="ip--172--31--20--165-k8s-calico--kube--controllers--7d9458bfc6--g2s8x-eth0" Jul 7 00:02:08.448623 containerd[1969]: time="2025-07-07T00:02:08.445505036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:02:08.448623 containerd[1969]: time="2025-07-07T00:02:08.445594567Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:02:08.448623 containerd[1969]: time="2025-07-07T00:02:08.445631444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:02:08.461264 containerd[1969]: time="2025-07-07T00:02:08.453847834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:02:08.569442 containerd[1969]: time="2025-07-07T00:02:08.568602753Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:02:08.569442 containerd[1969]: time="2025-07-07T00:02:08.569316336Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:02:08.569442 containerd[1969]: time="2025-07-07T00:02:08.569349766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:02:08.576700 containerd[1969]: time="2025-07-07T00:02:08.576575724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:02:08.650426 systemd[1]: run-containerd-runc-k8s.io-6c03fc74ed4ce2b8540b24727b064d97861a4ca013362b495e2acd8f99847253-runc.zvb7E9.mount: Deactivated successfully. Jul 7 00:02:08.656024 sshd[4983]: pam_unix(sshd:session): session closed for user core Jul 7 00:02:08.660470 systemd[1]: Started cri-containerd-6c03fc74ed4ce2b8540b24727b064d97861a4ca013362b495e2acd8f99847253.scope - libcontainer container 6c03fc74ed4ce2b8540b24727b064d97861a4ca013362b495e2acd8f99847253. Jul 7 00:02:08.673014 systemd[1]: sshd@7-172.31.20.165:22-147.75.109.163:34652.service: Deactivated successfully. Jul 7 00:02:08.681221 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 00:02:08.685544 systemd-logind[1958]: Session 8 logged out. Waiting for processes to exit. Jul 7 00:02:08.692337 systemd-logind[1958]: Removed session 8. Jul 7 00:02:08.721785 systemd[1]: Started cri-containerd-a0ac35253b01dda17bc4b30308be18fec35d03f24bea2f42cbe67daa8e6986b5.scope - libcontainer container a0ac35253b01dda17bc4b30308be18fec35d03f24bea2f42cbe67daa8e6986b5. Jul 7 00:02:08.800788 systemd-networkd[1882]: cali95a79baf32e: Link UP Jul 7 00:02:08.802632 systemd-networkd[1882]: cali95a79baf32e: Gained carrier Jul 7 00:02:08.861601 containerd[1969]: 2025-07-07 00:02:08.092 [INFO][5084] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb" Jul 7 00:02:08.861601 containerd[1969]: 2025-07-07 00:02:08.092 [INFO][5084] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb" iface="eth0" netns="/var/run/netns/cni-d04468da-16fc-6f75-e54c-cbf2e88108a0" Jul 7 00:02:08.861601 containerd[1969]: 2025-07-07 00:02:08.098 [INFO][5084] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb" iface="eth0" netns="/var/run/netns/cni-d04468da-16fc-6f75-e54c-cbf2e88108a0" Jul 7 00:02:08.861601 containerd[1969]: 2025-07-07 00:02:08.103 [INFO][5084] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb" iface="eth0" netns="/var/run/netns/cni-d04468da-16fc-6f75-e54c-cbf2e88108a0" Jul 7 00:02:08.861601 containerd[1969]: 2025-07-07 00:02:08.104 [INFO][5084] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb" Jul 7 00:02:08.861601 containerd[1969]: 2025-07-07 00:02:08.104 [INFO][5084] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb" Jul 7 00:02:08.861601 containerd[1969]: 2025-07-07 00:02:08.450 [INFO][5120] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb" HandleID="k8s-pod-network.be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb" Workload="ip--172--31--20--165-k8s-goldmane--768f4c5c69--8f2qq-eth0" Jul 7 00:02:08.861601 containerd[1969]: 2025-07-07 00:02:08.458 [INFO][5120] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:08.861601 containerd[1969]: 2025-07-07 00:02:08.770 [INFO][5120] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:02:08.861601 containerd[1969]: 2025-07-07 00:02:08.815 [WARNING][5120] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb" HandleID="k8s-pod-network.be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb" Workload="ip--172--31--20--165-k8s-goldmane--768f4c5c69--8f2qq-eth0" Jul 7 00:02:08.861601 containerd[1969]: 2025-07-07 00:02:08.815 [INFO][5120] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb" HandleID="k8s-pod-network.be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb" Workload="ip--172--31--20--165-k8s-goldmane--768f4c5c69--8f2qq-eth0" Jul 7 00:02:08.861601 containerd[1969]: 2025-07-07 00:02:08.845 [INFO][5120] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:02:08.861601 containerd[1969]: 2025-07-07 00:02:08.855 [INFO][5084] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb" Jul 7 00:02:08.869274 containerd[1969]: time="2025-07-07T00:02:08.867756327Z" level=info msg="TearDown network for sandbox \"be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb\" successfully" Jul 7 00:02:08.869274 containerd[1969]: time="2025-07-07T00:02:08.867797116Z" level=info msg="StopPodSandbox for \"be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb\" returns successfully" Jul 7 00:02:08.874727 containerd[1969]: time="2025-07-07T00:02:08.871506150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-8f2qq,Uid:3424677f-768b-40b5-b2e4-d681424e64e2,Namespace:calico-system,Attempt:1,}" Jul 7 00:02:08.894354 containerd[1969]: 2025-07-07 00:02:08.227 [INFO][5083] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e" Jul 7 00:02:08.894354 containerd[1969]: 2025-07-07 00:02:08.234 [INFO][5083] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e" iface="eth0" netns="/var/run/netns/cni-0aa56156-c2e9-00a2-1b79-36741ee3569d" Jul 7 00:02:08.894354 containerd[1969]: 2025-07-07 00:02:08.235 [INFO][5083] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e" iface="eth0" netns="/var/run/netns/cni-0aa56156-c2e9-00a2-1b79-36741ee3569d" Jul 7 00:02:08.894354 containerd[1969]: 2025-07-07 00:02:08.239 [INFO][5083] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e" iface="eth0" netns="/var/run/netns/cni-0aa56156-c2e9-00a2-1b79-36741ee3569d" Jul 7 00:02:08.894354 containerd[1969]: 2025-07-07 00:02:08.239 [INFO][5083] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e" Jul 7 00:02:08.894354 containerd[1969]: 2025-07-07 00:02:08.239 [INFO][5083] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e" Jul 7 00:02:08.894354 containerd[1969]: 2025-07-07 00:02:08.735 [INFO][5136] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e" HandleID="k8s-pod-network.f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e" Workload="ip--172--31--20--165-k8s-coredns--668d6bf9bc--mzs6t-eth0" Jul 7 00:02:08.894354 containerd[1969]: 2025-07-07 00:02:08.740 [INFO][5136] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:08.894354 containerd[1969]: 2025-07-07 00:02:08.846 [INFO][5136] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:02:08.894354 containerd[1969]: 2025-07-07 00:02:08.860 [WARNING][5136] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e" HandleID="k8s-pod-network.f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e" Workload="ip--172--31--20--165-k8s-coredns--668d6bf9bc--mzs6t-eth0" Jul 7 00:02:08.894354 containerd[1969]: 2025-07-07 00:02:08.862 [INFO][5136] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e" HandleID="k8s-pod-network.f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e" Workload="ip--172--31--20--165-k8s-coredns--668d6bf9bc--mzs6t-eth0" Jul 7 00:02:08.894354 containerd[1969]: 2025-07-07 00:02:08.866 [INFO][5136] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:02:08.894354 containerd[1969]: 2025-07-07 00:02:08.880 [INFO][5083] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e" Jul 7 00:02:08.908647 containerd[1969]: time="2025-07-07T00:02:08.905293097Z" level=info msg="TearDown network for sandbox \"f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e\" successfully" Jul 7 00:02:08.908647 containerd[1969]: time="2025-07-07T00:02:08.906279728Z" level=info msg="StopPodSandbox for \"f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e\" returns successfully" Jul 7 00:02:08.913057 containerd[1969]: time="2025-07-07T00:02:08.913008457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mzs6t,Uid:06399cd4-39d2-47a4-bc67-182674acacc1,Namespace:kube-system,Attempt:1,}" Jul 7 00:02:08.943113 containerd[1969]: 2025-07-07 00:02:07.596 [INFO][5036] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:02:08.943113 containerd[1969]: 2025-07-07 00:02:07.703 [INFO][5036] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--qtjjz-eth0 calico-apiserver-68f9b8bfdc- calico-apiserver 882b3fc4-2da7-43e6-941f-358a9356321e 988 0 2025-07-07 00:01:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68f9b8bfdc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-20-165 calico-apiserver-68f9b8bfdc-qtjjz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali95a79baf32e [] [] }} ContainerID="b88d48d250cfadf4748c9c1905d74d81b81496fa58a95288be8e6f4cb78ef09f" Namespace="calico-apiserver" Pod="calico-apiserver-68f9b8bfdc-qtjjz" WorkloadEndpoint="ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--qtjjz-" Jul 7 00:02:08.943113 containerd[1969]: 2025-07-07 00:02:07.703 [INFO][5036] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b88d48d250cfadf4748c9c1905d74d81b81496fa58a95288be8e6f4cb78ef09f" Namespace="calico-apiserver" Pod="calico-apiserver-68f9b8bfdc-qtjjz" WorkloadEndpoint="ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--qtjjz-eth0" Jul 7 00:02:08.943113 containerd[1969]: 2025-07-07 00:02:08.047 [INFO][5096] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b88d48d250cfadf4748c9c1905d74d81b81496fa58a95288be8e6f4cb78ef09f" HandleID="k8s-pod-network.b88d48d250cfadf4748c9c1905d74d81b81496fa58a95288be8e6f4cb78ef09f" Workload="ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--qtjjz-eth0" Jul 7 00:02:08.943113 containerd[1969]: 2025-07-07 00:02:08.047 [INFO][5096] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b88d48d250cfadf4748c9c1905d74d81b81496fa58a95288be8e6f4cb78ef09f" HandleID="k8s-pod-network.b88d48d250cfadf4748c9c1905d74d81b81496fa58a95288be8e6f4cb78ef09f" Workload="ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--qtjjz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000102310), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-20-165", "pod":"calico-apiserver-68f9b8bfdc-qtjjz", "timestamp":"2025-07-07 00:02:08.04759288 +0000 UTC"}, Hostname:"ip-172-31-20-165", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:02:08.943113 
containerd[1969]: 2025-07-07 00:02:08.047 [INFO][5096] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:08.943113 containerd[1969]: 2025-07-07 00:02:08.280 [INFO][5096] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:02:08.943113 containerd[1969]: 2025-07-07 00:02:08.280 [INFO][5096] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-165' Jul 7 00:02:08.943113 containerd[1969]: 2025-07-07 00:02:08.354 [INFO][5096] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b88d48d250cfadf4748c9c1905d74d81b81496fa58a95288be8e6f4cb78ef09f" host="ip-172-31-20-165" Jul 7 00:02:08.943113 containerd[1969]: 2025-07-07 00:02:08.432 [INFO][5096] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-165" Jul 7 00:02:08.943113 containerd[1969]: 2025-07-07 00:02:08.508 [INFO][5096] ipam/ipam.go 511: Trying affinity for 192.168.93.128/26 host="ip-172-31-20-165" Jul 7 00:02:08.943113 containerd[1969]: 2025-07-07 00:02:08.541 [INFO][5096] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.128/26 host="ip-172-31-20-165" Jul 7 00:02:08.943113 containerd[1969]: 2025-07-07 00:02:08.599 [INFO][5096] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.128/26 host="ip-172-31-20-165" Jul 7 00:02:08.943113 containerd[1969]: 2025-07-07 00:02:08.600 [INFO][5096] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.93.128/26 handle="k8s-pod-network.b88d48d250cfadf4748c9c1905d74d81b81496fa58a95288be8e6f4cb78ef09f" host="ip-172-31-20-165" Jul 7 00:02:08.943113 containerd[1969]: 2025-07-07 00:02:08.628 [INFO][5096] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b88d48d250cfadf4748c9c1905d74d81b81496fa58a95288be8e6f4cb78ef09f Jul 7 00:02:08.943113 containerd[1969]: 2025-07-07 00:02:08.709 [INFO][5096] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.93.128/26 handle="k8s-pod-network.b88d48d250cfadf4748c9c1905d74d81b81496fa58a95288be8e6f4cb78ef09f" host="ip-172-31-20-165" Jul 7 00:02:08.943113 containerd[1969]: 2025-07-07 00:02:08.768 [INFO][5096] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.93.134/26] block=192.168.93.128/26 handle="k8s-pod-network.b88d48d250cfadf4748c9c1905d74d81b81496fa58a95288be8e6f4cb78ef09f" host="ip-172-31-20-165" Jul 7 00:02:08.943113 containerd[1969]: 2025-07-07 00:02:08.768 [INFO][5096] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.134/26] handle="k8s-pod-network.b88d48d250cfadf4748c9c1905d74d81b81496fa58a95288be8e6f4cb78ef09f" host="ip-172-31-20-165" Jul 7 00:02:08.943113 containerd[1969]: 2025-07-07 00:02:08.768 [INFO][5096] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 00:02:08.943113 containerd[1969]: 2025-07-07 00:02:08.768 [INFO][5096] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.134/26] IPv6=[] ContainerID="b88d48d250cfadf4748c9c1905d74d81b81496fa58a95288be8e6f4cb78ef09f" HandleID="k8s-pod-network.b88d48d250cfadf4748c9c1905d74d81b81496fa58a95288be8e6f4cb78ef09f" Workload="ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--qtjjz-eth0" Jul 7 00:02:08.950374 containerd[1969]: 2025-07-07 00:02:08.787 [INFO][5036] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b88d48d250cfadf4748c9c1905d74d81b81496fa58a95288be8e6f4cb78ef09f" Namespace="calico-apiserver" Pod="calico-apiserver-68f9b8bfdc-qtjjz" WorkloadEndpoint="ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--qtjjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--qtjjz-eth0", GenerateName:"calico-apiserver-68f9b8bfdc-", Namespace:"calico-apiserver", SelfLink:"", UID:"882b3fc4-2da7-43e6-941f-358a9356321e", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 36, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68f9b8bfdc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-165", ContainerID:"", Pod:"calico-apiserver-68f9b8bfdc-qtjjz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali95a79baf32e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:02:08.950374 containerd[1969]: 2025-07-07 00:02:08.790 [INFO][5036] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.134/32] ContainerID="b88d48d250cfadf4748c9c1905d74d81b81496fa58a95288be8e6f4cb78ef09f" Namespace="calico-apiserver" Pod="calico-apiserver-68f9b8bfdc-qtjjz" WorkloadEndpoint="ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--qtjjz-eth0" Jul 7 00:02:08.950374 containerd[1969]: 2025-07-07 00:02:08.792 [INFO][5036] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali95a79baf32e ContainerID="b88d48d250cfadf4748c9c1905d74d81b81496fa58a95288be8e6f4cb78ef09f" Namespace="calico-apiserver" Pod="calico-apiserver-68f9b8bfdc-qtjjz" WorkloadEndpoint="ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--qtjjz-eth0" Jul 7 00:02:08.950374 containerd[1969]: 2025-07-07 00:02:08.811 [INFO][5036] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b88d48d250cfadf4748c9c1905d74d81b81496fa58a95288be8e6f4cb78ef09f" Namespace="calico-apiserver" Pod="calico-apiserver-68f9b8bfdc-qtjjz" WorkloadEndpoint="ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--qtjjz-eth0" Jul 7 00:02:08.950374 containerd[1969]: 2025-07-07 00:02:08.819 [INFO][5036] cni-plugin/k8s.go 446: Added Mac, interface name, and
active container ID to endpoint ContainerID="b88d48d250cfadf4748c9c1905d74d81b81496fa58a95288be8e6f4cb78ef09f" Namespace="calico-apiserver" Pod="calico-apiserver-68f9b8bfdc-qtjjz" WorkloadEndpoint="ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--qtjjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--qtjjz-eth0", GenerateName:"calico-apiserver-68f9b8bfdc-", Namespace:"calico-apiserver", SelfLink:"", UID:"882b3fc4-2da7-43e6-941f-358a9356321e", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 36, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68f9b8bfdc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-165", ContainerID:"b88d48d250cfadf4748c9c1905d74d81b81496fa58a95288be8e6f4cb78ef09f", Pod:"calico-apiserver-68f9b8bfdc-qtjjz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali95a79baf32e", MAC:"42:5a:2e:3c:e6:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:02:08.950374 containerd[1969]: 2025-07-07 00:02:08.907 [INFO][5036] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b88d48d250cfadf4748c9c1905d74d81b81496fa58a95288be8e6f4cb78ef09f" Namespace="calico-apiserver" Pod="calico-apiserver-68f9b8bfdc-qtjjz" WorkloadEndpoint="ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--qtjjz-eth0" Jul 7 00:02:09.036983 containerd[1969]: time="2025-07-07T00:02:09.036918460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bf6gq,Uid:4e01e08e-a6ee-4ae7-9f88-e30bb7b73f15,Namespace:kube-system,Attempt:1,} returns sandbox id \"6c03fc74ed4ce2b8540b24727b064d97861a4ca013362b495e2acd8f99847253\"" Jul 7 00:02:09.051885 containerd[1969]: time="2025-07-07T00:02:09.051758030Z" level=info msg="CreateContainer within sandbox \"6c03fc74ed4ce2b8540b24727b064d97861a4ca013362b495e2acd8f99847253\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:02:09.152095 containerd[1969]: time="2025-07-07T00:02:09.151996786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:02:09.152755 containerd[1969]: time="2025-07-07T00:02:09.152616464Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:02:09.153079 containerd[1969]: time="2025-07-07T00:02:09.153014823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:02:09.160780 containerd[1969]: time="2025-07-07T00:02:09.159203500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:02:09.167008 systemd[1]: run-netns-cni\x2dd04468da\x2d16fc\x2d6f75\x2de54c\x2dcbf2e88108a0.mount: Deactivated successfully. Jul 7 00:02:09.167145 systemd[1]: run-netns-cni\x2d0aa56156\x2dc2e9\x2d00a2\x2d1b79\x2d36741ee3569d.mount: Deactivated successfully. Jul 7 00:02:09.228485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3786133539.mount: Deactivated successfully. Jul 7 00:02:09.271558 containerd[1969]: time="2025-07-07T00:02:09.270824844Z" level=info msg="CreateContainer within sandbox \"6c03fc74ed4ce2b8540b24727b064d97861a4ca013362b495e2acd8f99847253\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"30f15103e0b6120538bde60fc9f52081d934949611f2de47e3e9fcf35dadc38c\"" Jul 7 00:02:09.274692 containerd[1969]: time="2025-07-07T00:02:09.274356040Z" level=info msg="StartContainer for \"30f15103e0b6120538bde60fc9f52081d934949611f2de47e3e9fcf35dadc38c\"" Jul 7 00:02:09.297658 systemd[1]: Started cri-containerd-b88d48d250cfadf4748c9c1905d74d81b81496fa58a95288be8e6f4cb78ef09f.scope - libcontainer container b88d48d250cfadf4748c9c1905d74d81b81496fa58a95288be8e6f4cb78ef09f. Jul 7 00:02:09.462730 systemd[1]: Started cri-containerd-30f15103e0b6120538bde60fc9f52081d934949611f2de47e3e9fcf35dadc38c.scope - libcontainer container 30f15103e0b6120538bde60fc9f52081d934949611f2de47e3e9fcf35dadc38c. Jul 7 00:02:09.493484 systemd-networkd[1882]: cali3f0ef2e10e0: Gained IPv6LL Jul 7 00:02:09.607917 containerd[1969]: time="2025-07-07T00:02:09.607112400Z" level=info msg="StartContainer for \"30f15103e0b6120538bde60fc9f52081d934949611f2de47e3e9fcf35dadc38c\" returns successfully" Jul 7 00:02:09.687275 containerd[1969]: time="2025-07-07T00:02:09.685596329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d9458bfc6-g2s8x,Uid:323f6ecf-2b19-407b-9a25-f503af5af82a,Namespace:calico-system,Attempt:1,} returns sandbox id \"a0ac35253b01dda17bc4b30308be18fec35d03f24bea2f42cbe67daa8e6986b5\"" Jul 7 00:02:09.697424 systemd-networkd[1882]: cali3d619fdc3b3: Link UP Jul 7 00:02:09.707442 systemd-networkd[1882]: cali3d619fdc3b3: Gained carrier Jul 7 00:02:09.757303 containerd[1969]: 2025-07-07 00:02:09.112 [INFO][5222] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:02:09.757303 containerd[1969]: 2025-07-07 00:02:09.141 [INFO][5222] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--165-k8s-coredns--668d6bf9bc--mzs6t-eth0 coredns-668d6bf9bc- kube-system 06399cd4-39d2-47a4-bc67-182674acacc1 998 0 2025-07-07 00:01:27 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-20-165 coredns-668d6bf9bc-mzs6t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3d619fdc3b3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0a8250f9e771fa6872b64b767306993ecd550aaf9101489e502f773104394206" Namespace="kube-system" Pod="coredns-668d6bf9bc-mzs6t" WorkloadEndpoint="ip--172--31--20--165-k8s-coredns--668d6bf9bc--mzs6t-" Jul 7 00:02:09.757303 containerd[1969]: 2025-07-07 00:02:09.144 [INFO][5222] cni-plugin/k8s.go 74:
Extracted identifiers for CmdAddK8s ContainerID="0a8250f9e771fa6872b64b767306993ecd550aaf9101489e502f773104394206" Namespace="kube-system" Pod="coredns-668d6bf9bc-mzs6t" WorkloadEndpoint="ip--172--31--20--165-k8s-coredns--668d6bf9bc--mzs6t-eth0" Jul 7 00:02:09.757303 containerd[1969]: 2025-07-07 00:02:09.432 [INFO][5284] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0a8250f9e771fa6872b64b767306993ecd550aaf9101489e502f773104394206" HandleID="k8s-pod-network.0a8250f9e771fa6872b64b767306993ecd550aaf9101489e502f773104394206" Workload="ip--172--31--20--165-k8s-coredns--668d6bf9bc--mzs6t-eth0" Jul 7 00:02:09.757303 containerd[1969]: 2025-07-07 00:02:09.433 [INFO][5284] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0a8250f9e771fa6872b64b767306993ecd550aaf9101489e502f773104394206" HandleID="k8s-pod-network.0a8250f9e771fa6872b64b767306993ecd550aaf9101489e502f773104394206" Workload="ip--172--31--20--165-k8s-coredns--668d6bf9bc--mzs6t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00038c670), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-20-165", "pod":"coredns-668d6bf9bc-mzs6t", "timestamp":"2025-07-07 00:02:09.43270319 +0000 UTC"}, Hostname:"ip-172-31-20-165", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:02:09.757303 containerd[1969]: 2025-07-07 00:02:09.434 [INFO][5284] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:09.757303 containerd[1969]: 2025-07-07 00:02:09.434 [INFO][5284] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:02:09.757303 containerd[1969]: 2025-07-07 00:02:09.434 [INFO][5284] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-165' Jul 7 00:02:09.757303 containerd[1969]: 2025-07-07 00:02:09.476 [INFO][5284] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0a8250f9e771fa6872b64b767306993ecd550aaf9101489e502f773104394206" host="ip-172-31-20-165" Jul 7 00:02:09.757303 containerd[1969]: 2025-07-07 00:02:09.533 [INFO][5284] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-165" Jul 7 00:02:09.757303 containerd[1969]: 2025-07-07 00:02:09.551 [INFO][5284] ipam/ipam.go 511: Trying affinity for 192.168.93.128/26 host="ip-172-31-20-165" Jul 7 00:02:09.757303 containerd[1969]: 2025-07-07 00:02:09.556 [INFO][5284] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.128/26 host="ip-172-31-20-165" Jul 7 00:02:09.757303 containerd[1969]: 2025-07-07 00:02:09.561 [INFO][5284] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.128/26 host="ip-172-31-20-165" Jul 7 00:02:09.757303 containerd[1969]: 2025-07-07 00:02:09.561 [INFO][5284] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.93.128/26 handle="k8s-pod-network.0a8250f9e771fa6872b64b767306993ecd550aaf9101489e502f773104394206" host="ip-172-31-20-165" Jul 7 00:02:09.757303 containerd[1969]: 2025-07-07 00:02:09.567 [INFO][5284] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0a8250f9e771fa6872b64b767306993ecd550aaf9101489e502f773104394206 Jul 7 00:02:09.757303 containerd[1969]: 2025-07-07 00:02:09.586 [INFO][5284] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.93.128/26 handle="k8s-pod-network.0a8250f9e771fa6872b64b767306993ecd550aaf9101489e502f773104394206" 
host="ip-172-31-20-165" Jul 7 00:02:09.757303 containerd[1969]: 2025-07-07 00:02:09.613 [INFO][5284] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.93.135/26] block=192.168.93.128/26 handle="k8s-pod-network.0a8250f9e771fa6872b64b767306993ecd550aaf9101489e502f773104394206" host="ip-172-31-20-165" Jul 7 00:02:09.757303 containerd[1969]: 2025-07-07 00:02:09.613 [INFO][5284] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.135/26] handle="k8s-pod-network.0a8250f9e771fa6872b64b767306993ecd550aaf9101489e502f773104394206" host="ip-172-31-20-165" Jul 7 00:02:09.757303 containerd[1969]: 2025-07-07 00:02:09.613 [INFO][5284] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:02:09.757303 containerd[1969]: 2025-07-07 00:02:09.613 [INFO][5284] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.135/26] IPv6=[] ContainerID="0a8250f9e771fa6872b64b767306993ecd550aaf9101489e502f773104394206" HandleID="k8s-pod-network.0a8250f9e771fa6872b64b767306993ecd550aaf9101489e502f773104394206" Workload="ip--172--31--20--165-k8s-coredns--668d6bf9bc--mzs6t-eth0" Jul 7 00:02:09.759583 containerd[1969]: 2025-07-07 00:02:09.630 [INFO][5222] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0a8250f9e771fa6872b64b767306993ecd550aaf9101489e502f773104394206" Namespace="kube-system" Pod="coredns-668d6bf9bc-mzs6t" WorkloadEndpoint="ip--172--31--20--165-k8s-coredns--668d6bf9bc--mzs6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--165-k8s-coredns--668d6bf9bc--mzs6t-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"06399cd4-39d2-47a4-bc67-182674acacc1", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-165", ContainerID:"", Pod:"coredns-668d6bf9bc-mzs6t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3d619fdc3b3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:02:09.759583 containerd[1969]: 2025-07-07 00:02:09.633 [INFO][5222] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.135/32] ContainerID="0a8250f9e771fa6872b64b767306993ecd550aaf9101489e502f773104394206" Namespace="kube-system" Pod="coredns-668d6bf9bc-mzs6t" 
WorkloadEndpoint="ip--172--31--20--165-k8s-coredns--668d6bf9bc--mzs6t-eth0" Jul 7 00:02:09.759583 containerd[1969]: 2025-07-07 00:02:09.637 [INFO][5222] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3d619fdc3b3 ContainerID="0a8250f9e771fa6872b64b767306993ecd550aaf9101489e502f773104394206" Namespace="kube-system" Pod="coredns-668d6bf9bc-mzs6t" WorkloadEndpoint="ip--172--31--20--165-k8s-coredns--668d6bf9bc--mzs6t-eth0" Jul 7 00:02:09.759583 containerd[1969]: 2025-07-07 00:02:09.707 [INFO][5222] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0a8250f9e771fa6872b64b767306993ecd550aaf9101489e502f773104394206" Namespace="kube-system" Pod="coredns-668d6bf9bc-mzs6t" WorkloadEndpoint="ip--172--31--20--165-k8s-coredns--668d6bf9bc--mzs6t-eth0" Jul 7 00:02:09.759583 containerd[1969]: 2025-07-07 00:02:09.712 [INFO][5222] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0a8250f9e771fa6872b64b767306993ecd550aaf9101489e502f773104394206" Namespace="kube-system" Pod="coredns-668d6bf9bc-mzs6t" WorkloadEndpoint="ip--172--31--20--165-k8s-coredns--668d6bf9bc--mzs6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--165-k8s-coredns--668d6bf9bc--mzs6t-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"06399cd4-39d2-47a4-bc67-182674acacc1", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-165", ContainerID:"0a8250f9e771fa6872b64b767306993ecd550aaf9101489e502f773104394206", Pod:"coredns-668d6bf9bc-mzs6t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3d619fdc3b3", MAC:"26:3e:50:01:e2:6c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:02:09.759583 containerd[1969]: 2025-07-07 00:02:09.752 [INFO][5222] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0a8250f9e771fa6872b64b767306993ecd550aaf9101489e502f773104394206" Namespace="kube-system" Pod="coredns-668d6bf9bc-mzs6t" WorkloadEndpoint="ip--172--31--20--165-k8s-coredns--668d6bf9bc--mzs6t-eth0" Jul 7 00:02:09.809853 containerd[1969]: time="2025-07-07T00:02:09.809222053Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:02:09.809853 containerd[1969]: time="2025-07-07T00:02:09.809640777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:02:09.809853 containerd[1969]: time="2025-07-07T00:02:09.809667882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:02:09.811909 containerd[1969]: time="2025-07-07T00:02:09.810855142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:02:09.876706 systemd[1]: Started cri-containerd-0a8250f9e771fa6872b64b767306993ecd550aaf9101489e502f773104394206.scope - libcontainer container 0a8250f9e771fa6872b64b767306993ecd550aaf9101489e502f773104394206. Jul 7 00:02:09.878233 systemd-networkd[1882]: calie1f59c638b5: Gained IPv6LL Jul 7 00:02:09.917191 systemd-networkd[1882]: caliafd25875ad2: Link UP Jul 7 00:02:09.917786 systemd-networkd[1882]: caliafd25875ad2: Gained carrier Jul 7 00:02:09.978401 containerd[1969]: 2025-07-07 00:02:09.341 [INFO][5238] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:02:09.978401 containerd[1969]: 2025-07-07 00:02:09.405 [INFO][5238] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--165-k8s-goldmane--768f4c5c69--8f2qq-eth0 goldmane-768f4c5c69- calico-system 3424677f-768b-40b5-b2e4-d681424e64e2 996 0 2025-07-07 00:01:41 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-20-165 goldmane-768f4c5c69-8f2qq eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliafd25875ad2 [] [] }} ContainerID="2c233590e380171664bb9f4319c80ead6b44901fa9e4a26a8fd394c28a144d26" Namespace="calico-system" Pod="goldmane-768f4c5c69-8f2qq" WorkloadEndpoint="ip--172--31--20--165-k8s-goldmane--768f4c5c69--8f2qq-" Jul 7 00:02:09.978401 containerd[1969]: 2025-07-07 00:02:09.406 [INFO][5238] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2c233590e380171664bb9f4319c80ead6b44901fa9e4a26a8fd394c28a144d26" Namespace="calico-system" Pod="goldmane-768f4c5c69-8f2qq" WorkloadEndpoint="ip--172--31--20--165-k8s-goldmane--768f4c5c69--8f2qq-eth0" Jul 7 00:02:09.978401 containerd[1969]: 2025-07-07 00:02:09.529 [INFO][5324] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2c233590e380171664bb9f4319c80ead6b44901fa9e4a26a8fd394c28a144d26" HandleID="k8s-pod-network.2c233590e380171664bb9f4319c80ead6b44901fa9e4a26a8fd394c28a144d26" Workload="ip--172--31--20--165-k8s-goldmane--768f4c5c69--8f2qq-eth0" Jul 7 00:02:09.978401 containerd[1969]: 2025-07-07 00:02:09.529 [INFO][5324] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2c233590e380171664bb9f4319c80ead6b44901fa9e4a26a8fd394c28a144d26" HandleID="k8s-pod-network.2c233590e380171664bb9f4319c80ead6b44901fa9e4a26a8fd394c28a144d26" Workload="ip--172--31--20--165-k8s-goldmane--768f4c5c69--8f2qq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00038f420), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-165", "pod":"goldmane-768f4c5c69-8f2qq", "timestamp":"2025-07-07 00:02:09.529375311 +0000 UTC"},
Hostname:"ip-172-31-20-165", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:02:09.978401 containerd[1969]: 2025-07-07 00:02:09.529 [INFO][5324] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:09.978401 containerd[1969]: 2025-07-07 00:02:09.614 [INFO][5324] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:02:09.978401 containerd[1969]: 2025-07-07 00:02:09.615 [INFO][5324] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-165' Jul 7 00:02:09.978401 containerd[1969]: 2025-07-07 00:02:09.678 [INFO][5324] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2c233590e380171664bb9f4319c80ead6b44901fa9e4a26a8fd394c28a144d26" host="ip-172-31-20-165" Jul 7 00:02:09.978401 containerd[1969]: 2025-07-07 00:02:09.806 [INFO][5324] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-165" Jul 7 00:02:09.978401 containerd[1969]: 2025-07-07 00:02:09.816 [INFO][5324] ipam/ipam.go 511: Trying affinity for 192.168.93.128/26 host="ip-172-31-20-165" Jul 7 00:02:09.978401 containerd[1969]: 2025-07-07 00:02:09.825 [INFO][5324] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.128/26 host="ip-172-31-20-165" Jul 7 00:02:09.978401 containerd[1969]: 2025-07-07 00:02:09.839 [INFO][5324] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.128/26 host="ip-172-31-20-165" Jul 7 00:02:09.978401 containerd[1969]: 2025-07-07 00:02:09.839 [INFO][5324] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.93.128/26 handle="k8s-pod-network.2c233590e380171664bb9f4319c80ead6b44901fa9e4a26a8fd394c28a144d26" host="ip-172-31-20-165" Jul 7 00:02:09.978401 containerd[1969]: 2025-07-07 00:02:09.845 [INFO][5324] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2c233590e380171664bb9f4319c80ead6b44901fa9e4a26a8fd394c28a144d26 Jul 7 00:02:09.978401 containerd[1969]: 2025-07-07 00:02:09.858 [INFO][5324] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.93.128/26 handle="k8s-pod-network.2c233590e380171664bb9f4319c80ead6b44901fa9e4a26a8fd394c28a144d26" host="ip-172-31-20-165" Jul 7 00:02:09.978401 containerd[1969]: 2025-07-07 00:02:09.888 [INFO][5324] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.93.136/26] block=192.168.93.128/26 handle="k8s-pod-network.2c233590e380171664bb9f4319c80ead6b44901fa9e4a26a8fd394c28a144d26" host="ip-172-31-20-165" Jul 7 00:02:09.978401 containerd[1969]: 2025-07-07 00:02:09.888 [INFO][5324] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.136/26] handle="k8s-pod-network.2c233590e380171664bb9f4319c80ead6b44901fa9e4a26a8fd394c28a144d26" host="ip-172-31-20-165" Jul 7 00:02:09.978401 containerd[1969]: 2025-07-07 00:02:09.888 [INFO][5324] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 00:02:09.978401 containerd[1969]: 2025-07-07 00:02:09.889 [INFO][5324] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.136/26] IPv6=[] ContainerID="2c233590e380171664bb9f4319c80ead6b44901fa9e4a26a8fd394c28a144d26" HandleID="k8s-pod-network.2c233590e380171664bb9f4319c80ead6b44901fa9e4a26a8fd394c28a144d26" Workload="ip--172--31--20--165-k8s-goldmane--768f4c5c69--8f2qq-eth0" Jul 7 00:02:09.979712 containerd[1969]: 2025-07-07 00:02:09.909 [INFO][5238] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2c233590e380171664bb9f4319c80ead6b44901fa9e4a26a8fd394c28a144d26" Namespace="calico-system" Pod="goldmane-768f4c5c69-8f2qq" WorkloadEndpoint="ip--172--31--20--165-k8s-goldmane--768f4c5c69--8f2qq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--165-k8s-goldmane--768f4c5c69--8f2qq-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"3424677f-768b-40b5-b2e4-d681424e64e2", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-165", ContainerID:"", Pod:"goldmane-768f4c5c69-8f2qq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.93.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliafd25875ad2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:02:09.979712 containerd[1969]: 2025-07-07 00:02:09.909 [INFO][5238] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.136/32] ContainerID="2c233590e380171664bb9f4319c80ead6b44901fa9e4a26a8fd394c28a144d26" Namespace="calico-system" Pod="goldmane-768f4c5c69-8f2qq" WorkloadEndpoint="ip--172--31--20--165-k8s-goldmane--768f4c5c69--8f2qq-eth0" Jul 7 00:02:09.979712 containerd[1969]: 2025-07-07 00:02:09.913 [INFO][5238] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliafd25875ad2 ContainerID="2c233590e380171664bb9f4319c80ead6b44901fa9e4a26a8fd394c28a144d26" Namespace="calico-system" Pod="goldmane-768f4c5c69-8f2qq" WorkloadEndpoint="ip--172--31--20--165-k8s-goldmane--768f4c5c69--8f2qq-eth0" Jul 7 00:02:09.979712 containerd[1969]: 2025-07-07 00:02:09.918 [INFO][5238] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2c233590e380171664bb9f4319c80ead6b44901fa9e4a26a8fd394c28a144d26" Namespace="calico-system" Pod="goldmane-768f4c5c69-8f2qq" WorkloadEndpoint="ip--172--31--20--165-k8s-goldmane--768f4c5c69--8f2qq-eth0" Jul 7 00:02:09.979712 containerd[1969]: 2025-07-07 00:02:09.919 [INFO][5238] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2c233590e380171664bb9f4319c80ead6b44901fa9e4a26a8fd394c28a144d26" Namespace="calico-system" Pod="goldmane-768f4c5c69-8f2qq"
WorkloadEndpoint="ip--172--31--20--165-k8s-goldmane--768f4c5c69--8f2qq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--165-k8s-goldmane--768f4c5c69--8f2qq-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"3424677f-768b-40b5-b2e4-d681424e64e2", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-165", ContainerID:"2c233590e380171664bb9f4319c80ead6b44901fa9e4a26a8fd394c28a144d26", Pod:"goldmane-768f4c5c69-8f2qq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.93.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliafd25875ad2", MAC:"92:5b:83:78:ff:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:02:09.979712 containerd[1969]: 2025-07-07 00:02:09.965 [INFO][5238] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2c233590e380171664bb9f4319c80ead6b44901fa9e4a26a8fd394c28a144d26" Namespace="calico-system" Pod="goldmane-768f4c5c69-8f2qq" WorkloadEndpoint="ip--172--31--20--165-k8s-goldmane--768f4c5c69--8f2qq-eth0" Jul 7 00:02:10.131525 containerd[1969]: time="2025-07-07T00:02:10.131044577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:02:10.131525 containerd[1969]: time="2025-07-07T00:02:10.131128142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:02:10.131525 containerd[1969]: time="2025-07-07T00:02:10.131151745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:02:10.133510 systemd-networkd[1882]: cali95a79baf32e: Gained IPv6LL Jul 7 00:02:10.135323 containerd[1969]: time="2025-07-07T00:02:10.134566113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mzs6t,Uid:06399cd4-39d2-47a4-bc67-182674acacc1,Namespace:kube-system,Attempt:1,} returns sandbox id \"0a8250f9e771fa6872b64b767306993ecd550aaf9101489e502f773104394206\"" Jul 7 00:02:10.143879 containerd[1969]: time="2025-07-07T00:02:10.135847284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:02:10.162984 containerd[1969]: time="2025-07-07T00:02:10.161584547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68f9b8bfdc-qtjjz,Uid:882b3fc4-2da7-43e6-941f-358a9356321e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b88d48d250cfadf4748c9c1905d74d81b81496fa58a95288be8e6f4cb78ef09f\"" Jul 7 00:02:10.162720 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount185937031.mount: Deactivated successfully. Jul 7 00:02:10.184535 containerd[1969]: time="2025-07-07T00:02:10.184464347Z" level=info msg="CreateContainer within sandbox \"0a8250f9e771fa6872b64b767306993ecd550aaf9101489e502f773104394206\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:02:10.247010 containerd[1969]: time="2025-07-07T00:02:10.246775985Z" level=info msg="CreateContainer within sandbox \"0a8250f9e771fa6872b64b767306993ecd550aaf9101489e502f773104394206\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"67674cece1c0180faf13f056a38e3d57912e3f05b1ffb389bdd9578febf8d0ec\"" Jul 7 00:02:10.249497 containerd[1969]: time="2025-07-07T00:02:10.249411249Z" level=info msg="StartContainer for \"67674cece1c0180faf13f056a38e3d57912e3f05b1ffb389bdd9578febf8d0ec\"" Jul 7 00:02:10.250178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3163482743.mount: Deactivated successfully. Jul 7 00:02:10.261628 systemd[1]: Started cri-containerd-2c233590e380171664bb9f4319c80ead6b44901fa9e4a26a8fd394c28a144d26.scope - libcontainer container 2c233590e380171664bb9f4319c80ead6b44901fa9e4a26a8fd394c28a144d26. Jul 7 00:02:10.379675 kubelet[3164]: I0707 00:02:10.378079 3164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-bf6gq" podStartSLOduration=43.366966988 podStartE2EDuration="43.366966988s" podCreationTimestamp="2025-07-07 00:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:02:10.35642354 +0000 UTC m=+49.876220123" watchObservedRunningTime="2025-07-07 00:02:10.366966988 +0000 UTC m=+49.886763570" Jul 7 00:02:10.401825 systemd[1]: Started cri-containerd-67674cece1c0180faf13f056a38e3d57912e3f05b1ffb389bdd9578febf8d0ec.scope - libcontainer container 67674cece1c0180faf13f056a38e3d57912e3f05b1ffb389bdd9578febf8d0ec. Jul 7 00:02:10.515729 containerd[1969]: time="2025-07-07T00:02:10.515444945Z" level=info msg="StartContainer for \"67674cece1c0180faf13f056a38e3d57912e3f05b1ffb389bdd9578febf8d0ec\" returns successfully" Jul 7 00:02:10.771702 containerd[1969]: time="2025-07-07T00:02:10.771162964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-8f2qq,Uid:3424677f-768b-40b5-b2e4-d681424e64e2,Namespace:calico-system,Attempt:1,} returns sandbox id \"2c233590e380171664bb9f4319c80ead6b44901fa9e4a26a8fd394c28a144d26\"" Jul 7 00:02:10.966255 kubelet[3164]: I0707 00:02:10.965331 3164 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:02:11.156459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2461281072.mount: Deactivated successfully. 
Jul 7 00:02:11.157473 systemd-networkd[1882]: cali3d619fdc3b3: Gained IPv6LL Jul 7 00:02:11.178024 containerd[1969]: time="2025-07-07T00:02:11.177517750Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:11.181826 containerd[1969]: time="2025-07-07T00:02:11.181620229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 7 00:02:11.184228 containerd[1969]: time="2025-07-07T00:02:11.184145114Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:11.190305 containerd[1969]: time="2025-07-07T00:02:11.190257507Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:11.192274 containerd[1969]: time="2025-07-07T00:02:11.192053744Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 5.910146828s" Jul 7 00:02:11.192274 containerd[1969]: time="2025-07-07T00:02:11.192105852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 7 00:02:11.198187 containerd[1969]: time="2025-07-07T00:02:11.197938908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 00:02:11.203022 containerd[1969]: time="2025-07-07T00:02:11.202132596Z" level=info msg="CreateContainer within sandbox \"5f60502a452b155edb740d3682be255a45d13b13e14b404950deacc9ea3fbbdc\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 7 00:02:11.235073 containerd[1969]: time="2025-07-07T00:02:11.234689973Z" level=info msg="CreateContainer within sandbox \"5f60502a452b155edb740d3682be255a45d13b13e14b404950deacc9ea3fbbdc\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"c807f36127c82d842ef4e7092cec03c020b09de3dc3a89e59f4a20a5c2a0da30\"" Jul 7 00:02:11.236265 containerd[1969]: time="2025-07-07T00:02:11.235572552Z" level=info msg="StartContainer for \"c807f36127c82d842ef4e7092cec03c020b09de3dc3a89e59f4a20a5c2a0da30\"" Jul 7 00:02:11.341455 systemd[1]: run-containerd-runc-k8s.io-c807f36127c82d842ef4e7092cec03c020b09de3dc3a89e59f4a20a5c2a0da30-runc.w9hHat.mount: Deactivated successfully. Jul 7 00:02:11.358035 systemd[1]: Started cri-containerd-c807f36127c82d842ef4e7092cec03c020b09de3dc3a89e59f4a20a5c2a0da30.scope - libcontainer container c807f36127c82d842ef4e7092cec03c020b09de3dc3a89e59f4a20a5c2a0da30. 
Jul 7 00:02:11.417755 kubelet[3164]: I0707 00:02:11.416691 3164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-mzs6t" podStartSLOduration=44.416665723 podStartE2EDuration="44.416665723s" podCreationTimestamp="2025-07-07 00:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:02:11.383525291 +0000 UTC m=+50.903321878" watchObservedRunningTime="2025-07-07 00:02:11.416665723 +0000 UTC m=+50.936462311" Jul 7 00:02:11.492935 containerd[1969]: time="2025-07-07T00:02:11.492810317Z" level=info msg="StartContainer for \"c807f36127c82d842ef4e7092cec03c020b09de3dc3a89e59f4a20a5c2a0da30\" returns successfully" Jul 7 00:02:11.580729 kubelet[3164]: I0707 00:02:11.580646 3164 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:02:11.606467 systemd-networkd[1882]: caliafd25875ad2: Gained IPv6LL Jul 7 00:02:12.686270 kernel: bpftool[5664]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 7 00:02:13.010407 systemd-networkd[1882]: vxlan.calico: Link UP Jul 7 00:02:13.010419 systemd-networkd[1882]: vxlan.calico: Gained carrier Jul 7 00:02:13.060371 (udev-worker)[4608]: Network interface NamePolicy= disabled on kernel command line. Jul 7 00:02:13.688442 systemd[1]: Started sshd@8-172.31.20.165:22-147.75.109.163:34654.service - OpenSSH per-connection server daemon (147.75.109.163:34654). Jul 7 00:02:13.922978 sshd[5747]: Accepted publickey for core from 147.75.109.163 port 34654 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:02:13.925730 sshd[5747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:02:13.932305 systemd-logind[1958]: New session 9 of user core. Jul 7 00:02:13.937471 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 00:02:14.037617 systemd-networkd[1882]: vxlan.calico: Gained IPv6LL Jul 7 00:02:14.722612 sshd[5747]: pam_unix(sshd:session): session closed for user core Jul 7 00:02:14.728728 systemd[1]: sshd@8-172.31.20.165:22-147.75.109.163:34654.service: Deactivated successfully. Jul 7 00:02:14.740971 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 00:02:14.747055 systemd-logind[1958]: Session 9 logged out. Waiting for processes to exit. Jul 7 00:02:14.753089 systemd-logind[1958]: Removed session 9. 
Jul 7 00:02:16.784919 ntpd[1953]: Listen normally on 8 vxlan.calico 192.168.93.128:123 Jul 7 00:02:16.785812 ntpd[1953]: Listen normally on 9 calidf6e089db00 [fe80::ecee:eeff:feee:eeee%4]:123 Jul 7 00:02:16.785874 ntpd[1953]: Listen normally on 10 cali8318b63015f [fe80::ecee:eeff:feee:eeee%5]:123 Jul 7 00:02:16.785914 ntpd[1953]: Listen normally on 11 cali4642b6fa2c4 [fe80::ecee:eeff:feee:eeee%6]:123 Jul 7 00:02:16.785952 ntpd[1953]: Listen normally on 12 calie1f59c638b5 [fe80::ecee:eeff:feee:eeee%7]:123 Jul 7 00:02:16.785990 ntpd[1953]: Listen normally on 13 cali3f0ef2e10e0 [fe80::ecee:eeff:feee:eeee%8]:123 Jul 7 00:02:16.786030 ntpd[1953]: Listen normally on 14 cali95a79baf32e [fe80::ecee:eeff:feee:eeee%9]:123 Jul 7 00:02:16.786078 ntpd[1953]: Listen normally on 15 cali3d619fdc3b3 [fe80::ecee:eeff:feee:eeee%10]:123 Jul 7 00:02:16.786114 ntpd[1953]: Listen normally on 16 caliafd25875ad2 [fe80::ecee:eeff:feee:eeee%11]:123 Jul 7 00:02:16.786159 ntpd[1953]: Listen normally on 17 vxlan.calico [fe80::64a9:deff:fecd:c147%12]:123 Jul 7 00:02:16.972154 containerd[1969]: time="2025-07-07T00:02:16.972090388Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:16.974232 containerd[1969]: time="2025-07-07T00:02:16.973957681Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 7 00:02:16.977709 containerd[1969]: time="2025-07-07T00:02:16.976292783Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:16.980147 containerd[1969]: time="2025-07-07T00:02:16.980078148Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:16.981071 containerd[1969]: time="2025-07-07T00:02:16.981025668Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo
tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 5.783040618s" Jul 7 00:02:16.981387 containerd[1969]: time="2025-07-07T00:02:16.981225549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 00:02:16.983097 containerd[1969]: time="2025-07-07T00:02:16.982881688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 7 00:02:17.025258 containerd[1969]: time="2025-07-07T00:02:17.024479881Z" level=info msg="CreateContainer within sandbox \"d99cb9c39c4065e4dcac487be2cb7aa0182e66e71fde9ee37787e9220836f264\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 00:02:17.057609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3637966804.mount: Deactivated successfully. Jul 7 00:02:17.098254 containerd[1969]: time="2025-07-07T00:02:17.098180951Z" level=info msg="CreateContainer within sandbox \"d99cb9c39c4065e4dcac487be2cb7aa0182e66e71fde9ee37787e9220836f264\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"024c133eacfdded93520c98501c1e01ff9c74e79eea1d5bc2a91d63a1ae70e1c\"" Jul 7 00:02:17.099818 containerd[1969]: time="2025-07-07T00:02:17.099534769Z" level=info msg="StartContainer for \"024c133eacfdded93520c98501c1e01ff9c74e79eea1d5bc2a91d63a1ae70e1c\"" Jul 7 00:02:17.192530 systemd[1]: Started cri-containerd-024c133eacfdded93520c98501c1e01ff9c74e79eea1d5bc2a91d63a1ae70e1c.scope - libcontainer container 024c133eacfdded93520c98501c1e01ff9c74e79eea1d5bc2a91d63a1ae70e1c. Jul 7 00:02:17.272571 containerd[1969]: time="2025-07-07T00:02:17.272384461Z" level=info msg="StartContainer for \"024c133eacfdded93520c98501c1e01ff9c74e79eea1d5bc2a91d63a1ae70e1c\" returns successfully" Jul 7 00:02:17.562377 kubelet[3164]: I0707 00:02:17.562285 3164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7b4757d6fc-vvxrz" podStartSLOduration=7.877513174 podStartE2EDuration="15.562233402s" podCreationTimestamp="2025-07-07 00:02:02 +0000 UTC" firstStartedPulling="2025-07-07 00:02:03.510622568 +0000 UTC m=+43.030419145" lastFinishedPulling="2025-07-07 00:02:11.195342776 +0000 UTC m=+50.715139373" observedRunningTime="2025-07-07 00:02:12.40461261 +0000 UTC m=+51.924409199" watchObservedRunningTime="2025-07-07 00:02:17.562233402 +0000 UTC m=+57.082029986" Jul 7 00:02:18.881392 containerd[1969]: time="2025-07-07T00:02:18.881336717Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:18.882579 containerd[1969]: time="2025-07-07T00:02:18.882406977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 7 00:02:18.884621 containerd[1969]: time="2025-07-07T00:02:18.883530145Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:18.886316 containerd[1969]: time="2025-07-07T00:02:18.886250142Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:18.887042 containerd[1969]: 
time="2025-07-07T00:02:18.886993507Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.904077988s" Jul 7 00:02:18.887042 containerd[1969]: time="2025-07-07T00:02:18.887029843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 7 00:02:18.889137 containerd[1969]: time="2025-07-07T00:02:18.888420704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 7 00:02:18.892722 containerd[1969]: time="2025-07-07T00:02:18.892679498Z" level=info msg="CreateContainer within sandbox \"021dd6616844c4deedfc5267b4bd319130b9734b97b8179950c1e3bbd8c0c81b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 7 00:02:18.961679 containerd[1969]: time="2025-07-07T00:02:18.961532817Z" level=info msg="CreateContainer within sandbox \"021dd6616844c4deedfc5267b4bd319130b9734b97b8179950c1e3bbd8c0c81b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e816b1a73b1f82cc8af1a0c53df4506ae9eb009c0baba0e76c737959b96f1a03\"" Jul 7 00:02:18.962491 containerd[1969]: time="2025-07-07T00:02:18.962460132Z" level=info msg="StartContainer for \"e816b1a73b1f82cc8af1a0c53df4506ae9eb009c0baba0e76c737959b96f1a03\"" Jul 7 00:02:19.040110 systemd[1]: Started cri-containerd-e816b1a73b1f82cc8af1a0c53df4506ae9eb009c0baba0e76c737959b96f1a03.scope - libcontainer container e816b1a73b1f82cc8af1a0c53df4506ae9eb009c0baba0e76c737959b96f1a03. Jul 7 00:02:19.133043 containerd[1969]: time="2025-07-07T00:02:19.132909902Z" level=info msg="StartContainer for \"e816b1a73b1f82cc8af1a0c53df4506ae9eb009c0baba0e76c737959b96f1a03\" returns successfully" Jul 7 00:02:19.244749 kubelet[3164]: I0707 00:02:19.244680 3164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-68f9b8bfdc-lcq8w" podStartSLOduration=31.62818494 podStartE2EDuration="43.244652157s" podCreationTimestamp="2025-07-07 00:01:36 +0000 UTC" firstStartedPulling="2025-07-07 00:02:05.366161738 +0000 UTC m=+44.885958318" lastFinishedPulling="2025-07-07 00:02:16.982628958 +0000 UTC m=+56.502425535" observedRunningTime="2025-07-07 00:02:17.561939703 +0000 UTC m=+57.081736290" watchObservedRunningTime="2025-07-07 00:02:19.244652157 +0000 UTC m=+58.764448741" Jul 7 00:02:19.758093 systemd[1]: Started sshd@9-172.31.20.165:22-147.75.109.163:46954.service - OpenSSH per-connection server daemon (147.75.109.163:46954). Jul 7 00:02:20.009528 sshd[5865]: Accepted publickey for core from 147.75.109.163 port 46954 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:02:20.022280 sshd[5865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:02:20.064917 systemd-logind[1958]: New session 10 of user core. Jul 7 00:02:20.077353 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 00:02:21.649915 containerd[1969]: time="2025-07-07T00:02:21.648205494Z" level=info msg="StopPodSandbox for \"4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9\"" Jul 7 00:02:21.836847 sshd[5865]: pam_unix(sshd:session): session closed for user core Jul 7 00:02:21.849364 systemd-logind[1958]: Session 10 logged out. 
Waiting for processes to exit. Jul 7 00:02:21.849980 systemd[1]: sshd@9-172.31.20.165:22-147.75.109.163:46954.service: Deactivated successfully. Jul 7 00:02:21.855966 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 00:02:21.889636 systemd-logind[1958]: Removed session 10. Jul 7 00:02:21.893663 systemd[1]: Started sshd@10-172.31.20.165:22-147.75.109.163:46966.service - OpenSSH per-connection server daemon (147.75.109.163:46966). Jul 7 00:02:22.145124 sshd[5901]: Accepted publickey for core from 147.75.109.163 port 46966 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:02:22.160606 sshd[5901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:02:22.168576 systemd-logind[1958]: New session 11 of user core. Jul 7 00:02:22.173436 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 7 00:02:22.947267 containerd[1969]: 2025-07-07 00:02:22.300 [WARNING][5894] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9" WorkloadEndpoint="ip--172--31--20--165-k8s-whisker--55469f4b7--fbf5p-eth0" Jul 7 00:02:22.947267 containerd[1969]: 2025-07-07 00:02:22.300 [INFO][5894] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9" Jul 7 00:02:22.947267 containerd[1969]: 2025-07-07 00:02:22.301 [INFO][5894] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9" iface="eth0" netns="" Jul 7 00:02:22.947267 containerd[1969]: 2025-07-07 00:02:22.301 [INFO][5894] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9" Jul 7 00:02:22.947267 containerd[1969]: 2025-07-07 00:02:22.301 [INFO][5894] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9" Jul 7 00:02:22.947267 containerd[1969]: 2025-07-07 00:02:22.901 [INFO][5908] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9" HandleID="k8s-pod-network.4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9" Workload="ip--172--31--20--165-k8s-whisker--55469f4b7--fbf5p-eth0" Jul 7 00:02:22.947267 containerd[1969]: 2025-07-07 00:02:22.907 [INFO][5908] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:22.947267 containerd[1969]: 2025-07-07 00:02:22.907 [INFO][5908] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:02:22.947267 containerd[1969]: 2025-07-07 00:02:22.930 [WARNING][5908] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9" HandleID="k8s-pod-network.4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9" Workload="ip--172--31--20--165-k8s-whisker--55469f4b7--fbf5p-eth0" Jul 7 00:02:22.947267 containerd[1969]: 2025-07-07 00:02:22.935 [INFO][5908] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9" HandleID="k8s-pod-network.4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9" Workload="ip--172--31--20--165-k8s-whisker--55469f4b7--fbf5p-eth0" Jul 7 00:02:22.947267 containerd[1969]: 2025-07-07 00:02:22.939 [INFO][5908] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:02:22.947267 containerd[1969]: 2025-07-07 00:02:22.943 [INFO][5894] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9" Jul 7 00:02:22.964195 containerd[1969]: time="2025-07-07T00:02:22.964133756Z" level=info msg="TearDown network for sandbox \"4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9\" successfully" Jul 7 00:02:22.964195 containerd[1969]: time="2025-07-07T00:02:22.964190278Z" level=info msg="StopPodSandbox for \"4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9\" returns successfully" Jul 7 00:02:23.090020 containerd[1969]: time="2025-07-07T00:02:23.089911097Z" level=info msg="RemovePodSandbox for \"4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9\"" Jul 7 00:02:23.096671 containerd[1969]: time="2025-07-07T00:02:23.096544133Z" level=info msg="Forcibly stopping sandbox \"4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9\"" Jul 7 00:02:23.142130 sshd[5901]: pam_unix(sshd:session): session closed for user core Jul 7 00:02:23.160555 systemd[1]: sshd@10-172.31.20.165:22-147.75.109.163:46966.service: Deactivated successfully. Jul 7 00:02:23.166133 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 00:02:23.183829 systemd-logind[1958]: Session 11 logged out. Waiting for processes to exit. Jul 7 00:02:23.194537 systemd[1]: Started sshd@11-172.31.20.165:22-147.75.109.163:46978.service - OpenSSH per-connection server daemon (147.75.109.163:46978). Jul 7 00:02:23.198664 systemd-logind[1958]: Removed session 11. Jul 7 00:02:23.301959 containerd[1969]: 2025-07-07 00:02:23.210 [WARNING][5927] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9" WorkloadEndpoint="ip--172--31--20--165-k8s-whisker--55469f4b7--fbf5p-eth0" Jul 7 00:02:23.301959 containerd[1969]: 2025-07-07 00:02:23.211 [INFO][5927] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9" Jul 7 00:02:23.301959 containerd[1969]: 2025-07-07 00:02:23.211 [INFO][5927] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9" iface="eth0" netns="" Jul 7 00:02:23.301959 containerd[1969]: 2025-07-07 00:02:23.211 [INFO][5927] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9" Jul 7 00:02:23.301959 containerd[1969]: 2025-07-07 00:02:23.211 [INFO][5927] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9" Jul 7 00:02:23.301959 containerd[1969]: 2025-07-07 00:02:23.286 [INFO][5938] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9" HandleID="k8s-pod-network.4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9" Workload="ip--172--31--20--165-k8s-whisker--55469f4b7--fbf5p-eth0" Jul 7 00:02:23.301959 containerd[1969]: 2025-07-07 00:02:23.287 [INFO][5938] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:23.301959 containerd[1969]: 2025-07-07 00:02:23.287 [INFO][5938] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:02:23.301959 containerd[1969]: 2025-07-07 00:02:23.293 [WARNING][5938] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9" HandleID="k8s-pod-network.4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9" Workload="ip--172--31--20--165-k8s-whisker--55469f4b7--fbf5p-eth0" Jul 7 00:02:23.301959 containerd[1969]: 2025-07-07 00:02:23.293 [INFO][5938] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9" HandleID="k8s-pod-network.4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9" Workload="ip--172--31--20--165-k8s-whisker--55469f4b7--fbf5p-eth0" Jul 7 00:02:23.301959 containerd[1969]: 2025-07-07 00:02:23.295 [INFO][5938] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:02:23.301959 containerd[1969]: 2025-07-07 00:02:23.298 [INFO][5927] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9" Jul 7 00:02:23.303630 containerd[1969]: time="2025-07-07T00:02:23.301953428Z" level=info msg="TearDown network for sandbox \"4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9\" successfully" Jul 7 00:02:23.318921 containerd[1969]: time="2025-07-07T00:02:23.318183485Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:02:23.334495 containerd[1969]: time="2025-07-07T00:02:23.334329006Z" level=info msg="RemovePodSandbox \"4bddf868b97c75187aedd11f18bf2fd7fa7afb632d051e06f216a69db0aafdf9\" returns successfully" Jul 7 00:02:23.345316 containerd[1969]: time="2025-07-07T00:02:23.345275218Z" level=info msg="StopPodSandbox for \"f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e\"" Jul 7 00:02:23.430142 sshd[5936]: Accepted publickey for core from 147.75.109.163 port 46978 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:02:23.433173 sshd[5936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:02:23.443581 systemd-logind[1958]: New session 12 of user core. 
Jul 7 00:02:23.452689 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 00:02:23.461124 containerd[1969]: 2025-07-07 00:02:23.395 [WARNING][5953] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--165-k8s-coredns--668d6bf9bc--mzs6t-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"06399cd4-39d2-47a4-bc67-182674acacc1", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-165", ContainerID:"0a8250f9e771fa6872b64b767306993ecd550aaf9101489e502f773104394206", Pod:"coredns-668d6bf9bc-mzs6t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3d619fdc3b3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:02:23.461124 containerd[1969]: 2025-07-07 00:02:23.395 [INFO][5953] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e" Jul 7 00:02:23.461124 containerd[1969]: 2025-07-07 00:02:23.395 [INFO][5953] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e" iface="eth0" netns="" Jul 7 00:02:23.461124 containerd[1969]: 2025-07-07 00:02:23.396 [INFO][5953] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e" Jul 7 00:02:23.461124 containerd[1969]: 2025-07-07 00:02:23.396 [INFO][5953] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e" Jul 7 00:02:23.461124 containerd[1969]: 2025-07-07 00:02:23.434 [INFO][5960] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e" HandleID="k8s-pod-network.f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e" Workload="ip--172--31--20--165-k8s-coredns--668d6bf9bc--mzs6t-eth0" Jul 7 00:02:23.461124 containerd[1969]: 2025-07-07 00:02:23.434 [INFO][5960] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:23.461124 containerd[1969]: 2025-07-07 00:02:23.434 [INFO][5960] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:02:23.461124 containerd[1969]: 2025-07-07 00:02:23.444 [WARNING][5960] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e" HandleID="k8s-pod-network.f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e" Workload="ip--172--31--20--165-k8s-coredns--668d6bf9bc--mzs6t-eth0" Jul 7 00:02:23.461124 containerd[1969]: 2025-07-07 00:02:23.444 [INFO][5960] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e" HandleID="k8s-pod-network.f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e" Workload="ip--172--31--20--165-k8s-coredns--668d6bf9bc--mzs6t-eth0" Jul 7 00:02:23.461124 containerd[1969]: 2025-07-07 00:02:23.448 [INFO][5960] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:02:23.461124 containerd[1969]: 2025-07-07 00:02:23.452 [INFO][5953] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e" Jul 7 00:02:23.470091 containerd[1969]: time="2025-07-07T00:02:23.461492697Z" level=info msg="TearDown network for sandbox \"f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e\" successfully" Jul 7 00:02:23.470091 containerd[1969]: time="2025-07-07T00:02:23.461526076Z" level=info msg="StopPodSandbox for \"f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e\" returns successfully" Jul 7 00:02:23.470091 containerd[1969]: time="2025-07-07T00:02:23.465569899Z" level=info msg="RemovePodSandbox for \"f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e\"" Jul 7 00:02:23.470091 containerd[1969]: time="2025-07-07T00:02:23.465609570Z" level=info msg="Forcibly stopping sandbox \"f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e\"" Jul 7 00:02:23.694754 containerd[1969]: 2025-07-07 00:02:23.543 [WARNING][5975] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--165-k8s-coredns--668d6bf9bc--mzs6t-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"06399cd4-39d2-47a4-bc67-182674acacc1", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-165", ContainerID:"0a8250f9e771fa6872b64b767306993ecd550aaf9101489e502f773104394206", Pod:"coredns-668d6bf9bc-mzs6t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3d619fdc3b3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:02:23.694754 containerd[1969]: 2025-07-07 00:02:23.545 [INFO][5975] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e" Jul 7 00:02:23.694754 containerd[1969]: 2025-07-07 00:02:23.546 [INFO][5975] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e" iface="eth0" netns="" Jul 7 00:02:23.694754 containerd[1969]: 2025-07-07 00:02:23.547 [INFO][5975] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e" Jul 7 00:02:23.694754 containerd[1969]: 2025-07-07 00:02:23.547 [INFO][5975] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e" Jul 7 00:02:23.694754 containerd[1969]: 2025-07-07 00:02:23.645 [INFO][5987] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e" HandleID="k8s-pod-network.f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e" Workload="ip--172--31--20--165-k8s-coredns--668d6bf9bc--mzs6t-eth0" Jul 7 00:02:23.694754 containerd[1969]: 2025-07-07 00:02:23.647 [INFO][5987] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:23.694754 containerd[1969]: 2025-07-07 00:02:23.648 [INFO][5987] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:02:23.694754 containerd[1969]: 2025-07-07 00:02:23.667 [WARNING][5987] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e" HandleID="k8s-pod-network.f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e" Workload="ip--172--31--20--165-k8s-coredns--668d6bf9bc--mzs6t-eth0" Jul 7 00:02:23.694754 containerd[1969]: 2025-07-07 00:02:23.667 [INFO][5987] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e" HandleID="k8s-pod-network.f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e" Workload="ip--172--31--20--165-k8s-coredns--668d6bf9bc--mzs6t-eth0" Jul 7 00:02:23.694754 containerd[1969]: 2025-07-07 00:02:23.674 [INFO][5987] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:02:23.694754 containerd[1969]: 2025-07-07 00:02:23.686 [INFO][5975] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e" Jul 7 00:02:23.698021 containerd[1969]: time="2025-07-07T00:02:23.694805230Z" level=info msg="TearDown network for sandbox \"f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e\" successfully" Jul 7 00:02:23.768054 containerd[1969]: time="2025-07-07T00:02:23.766144961Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:02:23.768054 containerd[1969]: time="2025-07-07T00:02:23.766255926Z" level=info msg="RemovePodSandbox \"f32f5aa8a98d5b817e80250ae6754f265ac140ae6015e395192fbdc0b0d1210e\" returns successfully" Jul 7 00:02:23.770267 containerd[1969]: time="2025-07-07T00:02:23.769853115Z" level=info msg="StopPodSandbox for \"be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb\"" Jul 7 00:02:23.992612 sshd[5936]: pam_unix(sshd:session): session closed for user core Jul 7 00:02:24.005599 systemd[1]: sshd@11-172.31.20.165:22-147.75.109.163:46978.service: Deactivated successfully. Jul 7 00:02:24.013140 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 00:02:24.020336 systemd-logind[1958]: Session 12 logged out. Waiting for processes to exit. Jul 7 00:02:24.023605 systemd-logind[1958]: Removed session 12. Jul 7 00:02:24.037093 containerd[1969]: 2025-07-07 00:02:23.930 [WARNING][6015] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--165-k8s-goldmane--768f4c5c69--8f2qq-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"3424677f-768b-40b5-b2e4-d681424e64e2", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-165", ContainerID:"2c233590e380171664bb9f4319c80ead6b44901fa9e4a26a8fd394c28a144d26", Pod:"goldmane-768f4c5c69-8f2qq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.93.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliafd25875ad2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:02:24.037093 containerd[1969]: 2025-07-07 00:02:23.931 [INFO][6015] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb" Jul 7 00:02:24.037093 containerd[1969]: 2025-07-07 00:02:23.931 [INFO][6015] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb" iface="eth0" netns="" Jul 7 00:02:24.037093 containerd[1969]: 2025-07-07 00:02:23.932 [INFO][6015] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb" Jul 7 00:02:24.037093 containerd[1969]: 2025-07-07 00:02:23.932 [INFO][6015] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb" Jul 7 00:02:24.037093 containerd[1969]: 2025-07-07 00:02:23.993 [INFO][6022] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb" HandleID="k8s-pod-network.be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb" Workload="ip--172--31--20--165-k8s-goldmane--768f4c5c69--8f2qq-eth0" Jul 7 00:02:24.037093 containerd[1969]: 2025-07-07 00:02:23.996 [INFO][6022] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:24.037093 containerd[1969]: 2025-07-07 00:02:23.996 [INFO][6022] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:02:24.037093 containerd[1969]: 2025-07-07 00:02:24.011 [WARNING][6022] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb" HandleID="k8s-pod-network.be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb" Workload="ip--172--31--20--165-k8s-goldmane--768f4c5c69--8f2qq-eth0" Jul 7 00:02:24.037093 containerd[1969]: 2025-07-07 00:02:24.011 [INFO][6022] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb" HandleID="k8s-pod-network.be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb" Workload="ip--172--31--20--165-k8s-goldmane--768f4c5c69--8f2qq-eth0" Jul 7 00:02:24.037093 containerd[1969]: 2025-07-07 00:02:24.018 [INFO][6022] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:02:24.037093 containerd[1969]: 2025-07-07 00:02:24.029 [INFO][6015] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb" Jul 7 00:02:24.041191 containerd[1969]: time="2025-07-07T00:02:24.037324046Z" level=info msg="TearDown network for sandbox \"be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb\" successfully" Jul 7 00:02:24.041191 containerd[1969]: time="2025-07-07T00:02:24.037358820Z" level=info msg="StopPodSandbox for \"be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb\" returns successfully" Jul 7 00:02:24.053274 containerd[1969]: time="2025-07-07T00:02:24.052918754Z" level=info msg="RemovePodSandbox for \"be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb\"" Jul 7 00:02:24.053274 containerd[1969]: time="2025-07-07T00:02:24.052963229Z" level=info msg="Forcibly stopping sandbox \"be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb\"" Jul 7 00:02:24.194543 containerd[1969]: 2025-07-07 00:02:24.127 [WARNING][6038] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--165-k8s-goldmane--768f4c5c69--8f2qq-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"3424677f-768b-40b5-b2e4-d681424e64e2", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-165", ContainerID:"2c233590e380171664bb9f4319c80ead6b44901fa9e4a26a8fd394c28a144d26", Pod:"goldmane-768f4c5c69-8f2qq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.93.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliafd25875ad2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:02:24.194543 containerd[1969]: 2025-07-07 00:02:24.127 [INFO][6038] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb" Jul 7 00:02:24.194543 containerd[1969]: 2025-07-07 00:02:24.127 [INFO][6038] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb" iface="eth0" netns="" Jul 7 00:02:24.194543 containerd[1969]: 2025-07-07 00:02:24.127 [INFO][6038] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb" Jul 7 00:02:24.194543 containerd[1969]: 2025-07-07 00:02:24.128 [INFO][6038] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb" Jul 7 00:02:24.194543 containerd[1969]: 2025-07-07 00:02:24.175 [INFO][6045] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb" HandleID="k8s-pod-network.be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb" Workload="ip--172--31--20--165-k8s-goldmane--768f4c5c69--8f2qq-eth0" Jul 7 00:02:24.194543 containerd[1969]: 2025-07-07 00:02:24.175 [INFO][6045] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:24.194543 containerd[1969]: 2025-07-07 00:02:24.175 [INFO][6045] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:02:24.194543 containerd[1969]: 2025-07-07 00:02:24.186 [WARNING][6045] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb" HandleID="k8s-pod-network.be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb" Workload="ip--172--31--20--165-k8s-goldmane--768f4c5c69--8f2qq-eth0" Jul 7 00:02:24.194543 containerd[1969]: 2025-07-07 00:02:24.187 [INFO][6045] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb" HandleID="k8s-pod-network.be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb" Workload="ip--172--31--20--165-k8s-goldmane--768f4c5c69--8f2qq-eth0" Jul 7 00:02:24.194543 containerd[1969]: 2025-07-07 00:02:24.189 [INFO][6045] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:02:24.194543 containerd[1969]: 2025-07-07 00:02:24.192 [INFO][6038] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb" Jul 7 00:02:24.195860 containerd[1969]: time="2025-07-07T00:02:24.194586665Z" level=info msg="TearDown network for sandbox \"be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb\" successfully" Jul 7 00:02:24.201874 containerd[1969]: time="2025-07-07T00:02:24.201681885Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:02:24.201874 containerd[1969]: time="2025-07-07T00:02:24.201839419Z" level=info msg="RemovePodSandbox \"be1656ba39ce7ade116172899ddea535f4e05a1220b566b2745e3150d9a934eb\" returns successfully" Jul 7 00:02:24.203181 containerd[1969]: time="2025-07-07T00:02:24.203138425Z" level=info msg="StopPodSandbox for \"8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635\"" Jul 7 00:02:24.359524 containerd[1969]: 2025-07-07 00:02:24.288 [WARNING][6060] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--165-k8s-calico--kube--controllers--7d9458bfc6--g2s8x-eth0", GenerateName:"calico-kube-controllers-7d9458bfc6-", Namespace:"calico-system", SelfLink:"", UID:"323f6ecf-2b19-407b-9a25-f503af5af82a", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d9458bfc6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-165", ContainerID:"a0ac35253b01dda17bc4b30308be18fec35d03f24bea2f42cbe67daa8e6986b5", Pod:"calico-kube-controllers-7d9458bfc6-g2s8x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.93.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3f0ef2e10e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:02:24.359524 containerd[1969]: 2025-07-07 00:02:24.288 [INFO][6060] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635" Jul 7 00:02:24.359524 containerd[1969]: 2025-07-07 00:02:24.289 [INFO][6060] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635" iface="eth0" netns="" Jul 7 00:02:24.359524 containerd[1969]: 2025-07-07 00:02:24.289 [INFO][6060] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635" Jul 7 00:02:24.359524 containerd[1969]: 2025-07-07 00:02:24.289 [INFO][6060] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635" Jul 7 00:02:24.359524 containerd[1969]: 2025-07-07 00:02:24.335 [INFO][6068] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635" HandleID="k8s-pod-network.8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635" Workload="ip--172--31--20--165-k8s-calico--kube--controllers--7d9458bfc6--g2s8x-eth0" Jul 7 00:02:24.359524 containerd[1969]: 2025-07-07 00:02:24.335 [INFO][6068] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:24.359524 containerd[1969]: 2025-07-07 00:02:24.335 [INFO][6068] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:02:24.359524 containerd[1969]: 2025-07-07 00:02:24.345 [WARNING][6068] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635" HandleID="k8s-pod-network.8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635" Workload="ip--172--31--20--165-k8s-calico--kube--controllers--7d9458bfc6--g2s8x-eth0" Jul 7 00:02:24.359524 containerd[1969]: 2025-07-07 00:02:24.346 [INFO][6068] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635" HandleID="k8s-pod-network.8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635" Workload="ip--172--31--20--165-k8s-calico--kube--controllers--7d9458bfc6--g2s8x-eth0" Jul 7 00:02:24.359524 containerd[1969]: 2025-07-07 00:02:24.352 [INFO][6068] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:02:24.359524 containerd[1969]: 2025-07-07 00:02:24.356 [INFO][6060] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635" Jul 7 00:02:24.359524 containerd[1969]: time="2025-07-07T00:02:24.359468461Z" level=info msg="TearDown network for sandbox \"8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635\" successfully" Jul 7 00:02:24.359524 containerd[1969]: time="2025-07-07T00:02:24.359506778Z" level=info msg="StopPodSandbox for \"8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635\" returns successfully" Jul 7 00:02:24.392702 containerd[1969]: time="2025-07-07T00:02:24.361475991Z" level=info msg="RemovePodSandbox for \"8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635\"" Jul 7 00:02:24.392702 containerd[1969]: time="2025-07-07T00:02:24.361597360Z" level=info msg="Forcibly stopping sandbox \"8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635\"" Jul 7 00:02:24.525048 containerd[1969]: 2025-07-07 00:02:24.456 [WARNING][6082] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--165-k8s-calico--kube--controllers--7d9458bfc6--g2s8x-eth0", GenerateName:"calico-kube-controllers-7d9458bfc6-", Namespace:"calico-system", SelfLink:"", UID:"323f6ecf-2b19-407b-9a25-f503af5af82a", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d9458bfc6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-165", ContainerID:"a0ac35253b01dda17bc4b30308be18fec35d03f24bea2f42cbe67daa8e6986b5", Pod:"calico-kube-controllers-7d9458bfc6-g2s8x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.93.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3f0ef2e10e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:02:24.525048 containerd[1969]: 2025-07-07 00:02:24.456 [INFO][6082] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635" Jul 7 00:02:24.525048 containerd[1969]: 2025-07-07 00:02:24.456 [INFO][6082] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635" iface="eth0" netns="" Jul 7 00:02:24.525048 containerd[1969]: 2025-07-07 00:02:24.456 [INFO][6082] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635" Jul 7 00:02:24.525048 containerd[1969]: 2025-07-07 00:02:24.456 [INFO][6082] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635" Jul 7 00:02:24.525048 containerd[1969]: 2025-07-07 00:02:24.498 [INFO][6090] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635" HandleID="k8s-pod-network.8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635" Workload="ip--172--31--20--165-k8s-calico--kube--controllers--7d9458bfc6--g2s8x-eth0" Jul 7 00:02:24.525048 containerd[1969]: 2025-07-07 00:02:24.499 [INFO][6090] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:24.525048 containerd[1969]: 2025-07-07 00:02:24.499 [INFO][6090] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:02:24.525048 containerd[1969]: 2025-07-07 00:02:24.510 [WARNING][6090] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635" HandleID="k8s-pod-network.8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635" Workload="ip--172--31--20--165-k8s-calico--kube--controllers--7d9458bfc6--g2s8x-eth0" Jul 7 00:02:24.525048 containerd[1969]: 2025-07-07 00:02:24.510 [INFO][6090] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635" HandleID="k8s-pod-network.8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635" Workload="ip--172--31--20--165-k8s-calico--kube--controllers--7d9458bfc6--g2s8x-eth0" Jul 7 00:02:24.525048 containerd[1969]: 2025-07-07 00:02:24.514 [INFO][6090] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:02:24.525048 containerd[1969]: 2025-07-07 00:02:24.518 [INFO][6082] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635" Jul 7 00:02:24.525048 containerd[1969]: time="2025-07-07T00:02:24.524714288Z" level=info msg="TearDown network for sandbox \"8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635\" successfully" Jul 7 00:02:24.545399 containerd[1969]: time="2025-07-07T00:02:24.544253679Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:02:24.545399 containerd[1969]: time="2025-07-07T00:02:24.544710448Z" level=info msg="RemovePodSandbox \"8e3ba9e1795e9f3e4dcdafe5f27b8853bad35a952a5e4c17e5c9162efc60e635\" returns successfully" Jul 7 00:02:24.579911 containerd[1969]: time="2025-07-07T00:02:24.575633028Z" level=info msg="StopPodSandbox for \"a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995\"" Jul 7 00:02:24.884015 containerd[1969]: 2025-07-07 00:02:24.757 [WARNING][6104] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--qtjjz-eth0", GenerateName:"calico-apiserver-68f9b8bfdc-", Namespace:"calico-apiserver", SelfLink:"", UID:"882b3fc4-2da7-43e6-941f-358a9356321e", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68f9b8bfdc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-165", ContainerID:"b88d48d250cfadf4748c9c1905d74d81b81496fa58a95288be8e6f4cb78ef09f", Pod:"calico-apiserver-68f9b8bfdc-qtjjz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali95a79baf32e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:02:24.884015 containerd[1969]: 2025-07-07 00:02:24.757 [INFO][6104] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995" Jul 7 00:02:24.884015 containerd[1969]: 2025-07-07 00:02:24.757 [INFO][6104] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995" iface="eth0" netns="" Jul 7 00:02:24.884015 containerd[1969]: 2025-07-07 00:02:24.757 [INFO][6104] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995" Jul 7 00:02:24.884015 containerd[1969]: 2025-07-07 00:02:24.757 [INFO][6104] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995" Jul 7 00:02:24.884015 containerd[1969]: 2025-07-07 00:02:24.843 [INFO][6112] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995" HandleID="k8s-pod-network.a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995" Workload="ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--qtjjz-eth0" Jul 7 00:02:24.884015 containerd[1969]: 2025-07-07 00:02:24.844 [INFO][6112] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:24.884015 containerd[1969]: 2025-07-07 00:02:24.844 [INFO][6112] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:02:24.884015 containerd[1969]: 2025-07-07 00:02:24.866 [WARNING][6112] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995" HandleID="k8s-pod-network.a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995" Workload="ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--qtjjz-eth0" Jul 7 00:02:24.884015 containerd[1969]: 2025-07-07 00:02:24.866 [INFO][6112] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995" HandleID="k8s-pod-network.a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995" Workload="ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--qtjjz-eth0" Jul 7 00:02:24.884015 containerd[1969]: 2025-07-07 00:02:24.871 [INFO][6112] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:02:24.884015 containerd[1969]: 2025-07-07 00:02:24.876 [INFO][6104] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995" Jul 7 00:02:24.884015 containerd[1969]: time="2025-07-07T00:02:24.883734076Z" level=info msg="TearDown network for sandbox \"a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995\" successfully" Jul 7 00:02:24.884015 containerd[1969]: time="2025-07-07T00:02:24.883782819Z" level=info msg="StopPodSandbox for \"a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995\" returns successfully" Jul 7 00:02:24.885675 containerd[1969]: time="2025-07-07T00:02:24.885402409Z" level=info msg="RemovePodSandbox for \"a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995\"" Jul 7 00:02:24.885675 containerd[1969]: time="2025-07-07T00:02:24.885441554Z" level=info msg="Forcibly stopping sandbox \"a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995\"" Jul 7 00:02:25.031354 containerd[1969]: 2025-07-07 00:02:24.960 [WARNING][6126] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--qtjjz-eth0", GenerateName:"calico-apiserver-68f9b8bfdc-", Namespace:"calico-apiserver", SelfLink:"", UID:"882b3fc4-2da7-43e6-941f-358a9356321e", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68f9b8bfdc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-165", ContainerID:"b88d48d250cfadf4748c9c1905d74d81b81496fa58a95288be8e6f4cb78ef09f", Pod:"calico-apiserver-68f9b8bfdc-qtjjz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali95a79baf32e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:02:25.031354 containerd[1969]: 2025-07-07 00:02:24.961 [INFO][6126] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995" Jul 7 00:02:25.031354 containerd[1969]: 2025-07-07 00:02:24.961 [INFO][6126] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995" iface="eth0" netns="" Jul 7 00:02:25.031354 containerd[1969]: 2025-07-07 00:02:24.961 [INFO][6126] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995" Jul 7 00:02:25.031354 containerd[1969]: 2025-07-07 00:02:24.961 [INFO][6126] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995" Jul 7 00:02:25.031354 containerd[1969]: 2025-07-07 00:02:25.003 [INFO][6134] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995" HandleID="k8s-pod-network.a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995" Workload="ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--qtjjz-eth0" Jul 7 00:02:25.031354 containerd[1969]: 2025-07-07 00:02:25.003 [INFO][6134] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:25.031354 containerd[1969]: 2025-07-07 00:02:25.003 [INFO][6134] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:02:25.031354 containerd[1969]: 2025-07-07 00:02:25.018 [WARNING][6134] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995" HandleID="k8s-pod-network.a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995" Workload="ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--qtjjz-eth0" Jul 7 00:02:25.031354 containerd[1969]: 2025-07-07 00:02:25.018 [INFO][6134] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995" HandleID="k8s-pod-network.a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995" Workload="ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--qtjjz-eth0" Jul 7 00:02:25.031354 containerd[1969]: 2025-07-07 00:02:25.021 [INFO][6134] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:02:25.031354 containerd[1969]: 2025-07-07 00:02:25.026 [INFO][6126] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995" Jul 7 00:02:25.031354 containerd[1969]: time="2025-07-07T00:02:25.030168016Z" level=info msg="TearDown network for sandbox \"a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995\" successfully" Jul 7 00:02:25.040996 containerd[1969]: time="2025-07-07T00:02:25.039552554Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:02:25.040996 containerd[1969]: time="2025-07-07T00:02:25.040033592Z" level=info msg="RemovePodSandbox \"a08e5b77a419af1e6945af4e89dff44bcbe918720a00ac3096a0c8bbacedb995\" returns successfully" Jul 7 00:02:25.040996 containerd[1969]: time="2025-07-07T00:02:25.040985027Z" level=info msg="StopPodSandbox for \"adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285\"" Jul 7 00:02:25.231085 containerd[1969]: 2025-07-07 00:02:25.153 [WARNING][6148] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--lcq8w-eth0", GenerateName:"calico-apiserver-68f9b8bfdc-", Namespace:"calico-apiserver", SelfLink:"", UID:"abc995fc-0743-4302-b622-a778795b2bb6", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68f9b8bfdc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-165", ContainerID:"d99cb9c39c4065e4dcac487be2cb7aa0182e66e71fde9ee37787e9220836f264", Pod:"calico-apiserver-68f9b8bfdc-lcq8w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8318b63015f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:02:25.231085 containerd[1969]: 2025-07-07 00:02:25.154 [INFO][6148] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285" Jul 7 00:02:25.231085 containerd[1969]: 2025-07-07 00:02:25.154 [INFO][6148] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285" iface="eth0" netns="" Jul 7 00:02:25.231085 containerd[1969]: 2025-07-07 00:02:25.154 [INFO][6148] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285" Jul 7 00:02:25.231085 containerd[1969]: 2025-07-07 00:02:25.154 [INFO][6148] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285" Jul 7 00:02:25.231085 containerd[1969]: 2025-07-07 00:02:25.197 [INFO][6156] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285" HandleID="k8s-pod-network.adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285" Workload="ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--lcq8w-eth0" Jul 7 00:02:25.231085 containerd[1969]: 2025-07-07 00:02:25.198 [INFO][6156] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:25.231085 containerd[1969]: 2025-07-07 00:02:25.198 [INFO][6156] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:02:25.231085 containerd[1969]: 2025-07-07 00:02:25.212 [WARNING][6156] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285" HandleID="k8s-pod-network.adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285" Workload="ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--lcq8w-eth0" Jul 7 00:02:25.231085 containerd[1969]: 2025-07-07 00:02:25.212 [INFO][6156] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285" HandleID="k8s-pod-network.adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285" Workload="ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--lcq8w-eth0" Jul 7 00:02:25.231085 containerd[1969]: 2025-07-07 00:02:25.217 [INFO][6156] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:02:25.231085 containerd[1969]: 2025-07-07 00:02:25.222 [INFO][6148] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285" Jul 7 00:02:25.232606 containerd[1969]: time="2025-07-07T00:02:25.232126081Z" level=info msg="TearDown network for sandbox \"adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285\" successfully" Jul 7 00:02:25.232606 containerd[1969]: time="2025-07-07T00:02:25.232164979Z" level=info msg="StopPodSandbox for \"adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285\" returns successfully" Jul 7 00:02:25.233788 containerd[1969]: time="2025-07-07T00:02:25.233224073Z" level=info msg="RemovePodSandbox for \"adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285\"" Jul 7 00:02:25.233788 containerd[1969]: time="2025-07-07T00:02:25.233275086Z" level=info msg="Forcibly stopping sandbox \"adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285\"" Jul 7 00:02:25.421703 containerd[1969]: 2025-07-07 00:02:25.339 [WARNING][6173] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--lcq8w-eth0", GenerateName:"calico-apiserver-68f9b8bfdc-", Namespace:"calico-apiserver", SelfLink:"", UID:"abc995fc-0743-4302-b622-a778795b2bb6", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68f9b8bfdc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-165", ContainerID:"d99cb9c39c4065e4dcac487be2cb7aa0182e66e71fde9ee37787e9220836f264", Pod:"calico-apiserver-68f9b8bfdc-lcq8w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8318b63015f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:02:25.421703 containerd[1969]: 2025-07-07 00:02:25.341 [INFO][6173] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285" Jul 7 00:02:25.421703 containerd[1969]: 2025-07-07 00:02:25.341 [INFO][6173] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285" iface="eth0" netns="" Jul 7 00:02:25.421703 containerd[1969]: 2025-07-07 00:02:25.341 [INFO][6173] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285" Jul 7 00:02:25.421703 containerd[1969]: 2025-07-07 00:02:25.341 [INFO][6173] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285" Jul 7 00:02:25.421703 containerd[1969]: 2025-07-07 00:02:25.399 [INFO][6180] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285" HandleID="k8s-pod-network.adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285" Workload="ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--lcq8w-eth0" Jul 7 00:02:25.421703 containerd[1969]: 2025-07-07 00:02:25.399 [INFO][6180] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:25.421703 containerd[1969]: 2025-07-07 00:02:25.399 [INFO][6180] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:02:25.421703 containerd[1969]: 2025-07-07 00:02:25.409 [WARNING][6180] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285" HandleID="k8s-pod-network.adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285" Workload="ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--lcq8w-eth0" Jul 7 00:02:25.421703 containerd[1969]: 2025-07-07 00:02:25.409 [INFO][6180] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285" HandleID="k8s-pod-network.adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285" Workload="ip--172--31--20--165-k8s-calico--apiserver--68f9b8bfdc--lcq8w-eth0" Jul 7 00:02:25.421703 containerd[1969]: 2025-07-07 00:02:25.412 [INFO][6180] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:02:25.421703 containerd[1969]: 2025-07-07 00:02:25.418 [INFO][6173] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285" Jul 7 00:02:25.424342 containerd[1969]: time="2025-07-07T00:02:25.421759511Z" level=info msg="TearDown network for sandbox \"adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285\" successfully" Jul 7 00:02:25.433162 containerd[1969]: time="2025-07-07T00:02:25.433111769Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:02:25.433327 containerd[1969]: time="2025-07-07T00:02:25.433194167Z" level=info msg="RemovePodSandbox \"adba85a7a23141e96ec205eef3780a021d6a4c9f888dcf1e4b233a3c7a5be285\" returns successfully" Jul 7 00:02:25.434616 containerd[1969]: time="2025-07-07T00:02:25.434520276Z" level=info msg="StopPodSandbox for \"a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1\"" Jul 7 00:02:25.598827 containerd[1969]: 2025-07-07 00:02:25.523 [WARNING][6195] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--165-k8s-coredns--668d6bf9bc--bf6gq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4e01e08e-a6ee-4ae7-9f88-e30bb7b73f15", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-165", ContainerID:"6c03fc74ed4ce2b8540b24727b064d97861a4ca013362b495e2acd8f99847253", Pod:"coredns-668d6bf9bc-bf6gq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie1f59c638b5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:02:25.598827 containerd[1969]: 2025-07-07 00:02:25.523 [INFO][6195] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1" Jul 7 00:02:25.598827 containerd[1969]: 2025-07-07 00:02:25.523 [INFO][6195] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1" iface="eth0" netns="" Jul 7 00:02:25.598827 containerd[1969]: 2025-07-07 00:02:25.523 [INFO][6195] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1" Jul 7 00:02:25.598827 containerd[1969]: 2025-07-07 00:02:25.523 [INFO][6195] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1" Jul 7 00:02:25.598827 containerd[1969]: 2025-07-07 00:02:25.569 [INFO][6202] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1" HandleID="k8s-pod-network.a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1" Workload="ip--172--31--20--165-k8s-coredns--668d6bf9bc--bf6gq-eth0" Jul 7 00:02:25.598827 containerd[1969]: 2025-07-07 00:02:25.569 [INFO][6202] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:25.598827 containerd[1969]: 2025-07-07 00:02:25.569 [INFO][6202] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:02:25.598827 containerd[1969]: 2025-07-07 00:02:25.585 [WARNING][6202] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1" HandleID="k8s-pod-network.a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1" Workload="ip--172--31--20--165-k8s-coredns--668d6bf9bc--bf6gq-eth0" Jul 7 00:02:25.598827 containerd[1969]: 2025-07-07 00:02:25.585 [INFO][6202] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1" HandleID="k8s-pod-network.a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1" Workload="ip--172--31--20--165-k8s-coredns--668d6bf9bc--bf6gq-eth0" Jul 7 00:02:25.598827 containerd[1969]: 2025-07-07 00:02:25.588 [INFO][6202] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:02:25.598827 containerd[1969]: 2025-07-07 00:02:25.592 [INFO][6195] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1" Jul 7 00:02:25.601008 containerd[1969]: time="2025-07-07T00:02:25.598803514Z" level=info msg="TearDown network for sandbox \"a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1\" successfully" Jul 7 00:02:25.601089 containerd[1969]: time="2025-07-07T00:02:25.601006368Z" level=info msg="StopPodSandbox for \"a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1\" returns successfully" Jul 7 00:02:25.602725 containerd[1969]: time="2025-07-07T00:02:25.601688715Z" level=info msg="RemovePodSandbox for \"a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1\"" Jul 7 00:02:25.602725 containerd[1969]: time="2025-07-07T00:02:25.601735121Z" level=info msg="Forcibly stopping sandbox \"a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1\"" Jul 7 00:02:25.836590 containerd[1969]: 2025-07-07 00:02:25.733 [WARNING][6217] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--165-k8s-coredns--668d6bf9bc--bf6gq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4e01e08e-a6ee-4ae7-9f88-e30bb7b73f15", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-165", ContainerID:"6c03fc74ed4ce2b8540b24727b064d97861a4ca013362b495e2acd8f99847253", Pod:"coredns-668d6bf9bc-bf6gq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie1f59c638b5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:02:25.836590 containerd[1969]: 2025-07-07 00:02:25.735 [INFO][6217] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1" Jul 7 00:02:25.836590 containerd[1969]: 2025-07-07 00:02:25.736 [INFO][6217] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1" iface="eth0" netns="" Jul 7 00:02:25.836590 containerd[1969]: 2025-07-07 00:02:25.736 [INFO][6217] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1" Jul 7 00:02:25.836590 containerd[1969]: 2025-07-07 00:02:25.736 [INFO][6217] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1" Jul 7 00:02:25.836590 containerd[1969]: 2025-07-07 00:02:25.803 [INFO][6224] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1" HandleID="k8s-pod-network.a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1" Workload="ip--172--31--20--165-k8s-coredns--668d6bf9bc--bf6gq-eth0" Jul 7 00:02:25.836590 containerd[1969]: 2025-07-07 00:02:25.806 [INFO][6224] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:25.836590 containerd[1969]: 2025-07-07 00:02:25.806 [INFO][6224] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:02:25.836590 containerd[1969]: 2025-07-07 00:02:25.824 [WARNING][6224] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1" HandleID="k8s-pod-network.a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1" Workload="ip--172--31--20--165-k8s-coredns--668d6bf9bc--bf6gq-eth0" Jul 7 00:02:25.836590 containerd[1969]: 2025-07-07 00:02:25.824 [INFO][6224] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1" HandleID="k8s-pod-network.a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1" Workload="ip--172--31--20--165-k8s-coredns--668d6bf9bc--bf6gq-eth0" Jul 7 00:02:25.836590 containerd[1969]: 2025-07-07 00:02:25.827 [INFO][6224] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:02:25.836590 containerd[1969]: 2025-07-07 00:02:25.830 [INFO][6217] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1" Jul 7 00:02:25.838300 containerd[1969]: time="2025-07-07T00:02:25.836653450Z" level=info msg="TearDown network for sandbox \"a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1\" successfully" Jul 7 00:02:25.851752 containerd[1969]: time="2025-07-07T00:02:25.851468283Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:02:25.851752 containerd[1969]: time="2025-07-07T00:02:25.851550967Z" level=info msg="RemovePodSandbox \"a44b5103e2279c832ab9b331969f0947a8ffe595233ec2bcf6c532627e7858e1\" returns successfully" Jul 7 00:02:25.853260 containerd[1969]: time="2025-07-07T00:02:25.853205245Z" level=info msg="StopPodSandbox for \"57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00\"" Jul 7 00:02:26.031923 containerd[1969]: 2025-07-07 00:02:25.937 [WARNING][6239] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--165-k8s-csi--node--driver--prlf8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"15afd7c1-8f7a-46ff-bed1-960e12f70585", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-165", ContainerID:"021dd6616844c4deedfc5267b4bd319130b9734b97b8179950c1e3bbd8c0c81b", Pod:"csi-node-driver-prlf8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.93.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4642b6fa2c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:02:26.031923 containerd[1969]: 2025-07-07 00:02:25.938 [INFO][6239] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00" Jul 7 00:02:26.031923 containerd[1969]: 2025-07-07 00:02:25.938 [INFO][6239] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00" iface="eth0" netns="" Jul 7 00:02:26.031923 containerd[1969]: 2025-07-07 00:02:25.938 [INFO][6239] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00" Jul 7 00:02:26.031923 containerd[1969]: 2025-07-07 00:02:25.938 [INFO][6239] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00" Jul 7 00:02:26.031923 containerd[1969]: 2025-07-07 00:02:26.002 [INFO][6246] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00" HandleID="k8s-pod-network.57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00" Workload="ip--172--31--20--165-k8s-csi--node--driver--prlf8-eth0" Jul 7 00:02:26.031923 containerd[1969]: 2025-07-07 00:02:26.003 [INFO][6246] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:26.031923 containerd[1969]: 2025-07-07 00:02:26.003 [INFO][6246] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:02:26.031923 containerd[1969]: 2025-07-07 00:02:26.018 [WARNING][6246] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00" HandleID="k8s-pod-network.57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00" Workload="ip--172--31--20--165-k8s-csi--node--driver--prlf8-eth0" Jul 7 00:02:26.031923 containerd[1969]: 2025-07-07 00:02:26.018 [INFO][6246] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00" HandleID="k8s-pod-network.57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00" Workload="ip--172--31--20--165-k8s-csi--node--driver--prlf8-eth0" Jul 7 00:02:26.031923 containerd[1969]: 2025-07-07 00:02:26.021 [INFO][6246] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:02:26.031923 containerd[1969]: 2025-07-07 00:02:26.027 [INFO][6239] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00" Jul 7 00:02:26.031923 containerd[1969]: time="2025-07-07T00:02:26.031872744Z" level=info msg="TearDown network for sandbox \"57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00\" successfully" Jul 7 00:02:26.031923 containerd[1969]: time="2025-07-07T00:02:26.031903940Z" level=info msg="StopPodSandbox for \"57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00\" returns successfully" Jul 7 00:02:26.034164 containerd[1969]: time="2025-07-07T00:02:26.032834702Z" level=info msg="RemovePodSandbox for \"57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00\"" Jul 7 00:02:26.034164 containerd[1969]: time="2025-07-07T00:02:26.032868725Z" level=info msg="Forcibly stopping sandbox \"57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00\"" Jul 7 00:02:26.184673 containerd[1969]: 2025-07-07 00:02:26.110 [WARNING][6260] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--165-k8s-csi--node--driver--prlf8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"15afd7c1-8f7a-46ff-bed1-960e12f70585", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-165", ContainerID:"021dd6616844c4deedfc5267b4bd319130b9734b97b8179950c1e3bbd8c0c81b", Pod:"csi-node-driver-prlf8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.93.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4642b6fa2c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:02:26.184673 containerd[1969]: 2025-07-07 00:02:26.110 [INFO][6260] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00" Jul 7 00:02:26.184673 containerd[1969]: 2025-07-07 00:02:26.110 [INFO][6260] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00" iface="eth0" netns="" Jul 7 00:02:26.184673 containerd[1969]: 2025-07-07 00:02:26.110 [INFO][6260] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00" Jul 7 00:02:26.184673 containerd[1969]: 2025-07-07 00:02:26.110 [INFO][6260] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00" Jul 7 00:02:26.184673 containerd[1969]: 2025-07-07 00:02:26.155 [INFO][6267] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00" HandleID="k8s-pod-network.57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00" Workload="ip--172--31--20--165-k8s-csi--node--driver--prlf8-eth0" Jul 7 00:02:26.184673 containerd[1969]: 2025-07-07 00:02:26.157 [INFO][6267] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:26.184673 containerd[1969]: 2025-07-07 00:02:26.157 [INFO][6267] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:02:26.184673 containerd[1969]: 2025-07-07 00:02:26.174 [WARNING][6267] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00" HandleID="k8s-pod-network.57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00" Workload="ip--172--31--20--165-k8s-csi--node--driver--prlf8-eth0" Jul 7 00:02:26.184673 containerd[1969]: 2025-07-07 00:02:26.174 [INFO][6267] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00" HandleID="k8s-pod-network.57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00" Workload="ip--172--31--20--165-k8s-csi--node--driver--prlf8-eth0" Jul 7 00:02:26.184673 containerd[1969]: 2025-07-07 00:02:26.178 [INFO][6267] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:02:26.184673 containerd[1969]: 2025-07-07 00:02:26.182 [INFO][6260] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00" Jul 7 00:02:26.187253 containerd[1969]: time="2025-07-07T00:02:26.185679334Z" level=info msg="TearDown network for sandbox \"57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00\" successfully" Jul 7 00:02:26.206408 containerd[1969]: time="2025-07-07T00:02:26.206183956Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:02:26.206408 containerd[1969]: time="2025-07-07T00:02:26.206269053Z" level=info msg="RemovePodSandbox \"57c91d309410bcf7981d2b0fc2175b367c525045620a6e3e62cfd286fe388d00\" returns successfully" Jul 7 00:02:26.385479 containerd[1969]: time="2025-07-07T00:02:26.385265649Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:26.420648 containerd[1969]: time="2025-07-07T00:02:26.418774321Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 7 00:02:26.445014 containerd[1969]: time="2025-07-07T00:02:26.444884300Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:26.452671 containerd[1969]: time="2025-07-07T00:02:26.452622338Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:26.454894 containerd[1969]: time="2025-07-07T00:02:26.454846831Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 7.564919148s" Jul 7 00:02:26.454894 containerd[1969]: time="2025-07-07T00:02:26.454892749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 7 00:02:26.500341 containerd[1969]: time="2025-07-07T00:02:26.499826951Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 00:02:26.759402 containerd[1969]: time="2025-07-07T00:02:26.758985306Z" level=info msg="CreateContainer within sandbox \"a0ac35253b01dda17bc4b30308be18fec35d03f24bea2f42cbe67daa8e6986b5\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 7 00:02:26.917684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2186556455.mount: Deactivated successfully. Jul 7 00:02:26.947184 containerd[1969]: time="2025-07-07T00:02:26.947123390Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:26.947752 containerd[1969]: time="2025-07-07T00:02:26.947300385Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 7 00:02:26.955592 containerd[1969]: time="2025-07-07T00:02:26.955522419Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 455.640609ms" Jul 7 00:02:26.955592 containerd[1969]: time="2025-07-07T00:02:26.955576505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 00:02:26.989979 containerd[1969]: time="2025-07-07T00:02:26.988815346Z" level=info msg="CreateContainer within sandbox \"a0ac35253b01dda17bc4b30308be18fec35d03f24bea2f42cbe67daa8e6986b5\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2d4376cd4cadfb6db91b1918f1e81b93025335fee42dc46ecf16829e823879cb\"" Jul 7 00:02:26.997443 containerd[1969]: time="2025-07-07T00:02:26.996904569Z" level=info msg="StartContainer for \"2d4376cd4cadfb6db91b1918f1e81b93025335fee42dc46ecf16829e823879cb\"" Jul 7 00:02:27.033182 containerd[1969]: time="2025-07-07T00:02:27.033054983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 7 00:02:27.036026 containerd[1969]: time="2025-07-07T00:02:27.035868823Z" level=info msg="CreateContainer within sandbox \"b88d48d250cfadf4748c9c1905d74d81b81496fa58a95288be8e6f4cb78ef09f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 00:02:27.097348 containerd[1969]: time="2025-07-07T00:02:27.097273050Z" level=info msg="CreateContainer within sandbox \"b88d48d250cfadf4748c9c1905d74d81b81496fa58a95288be8e6f4cb78ef09f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7a8eaf990fc1e308eabf7d27f8b813abcbe315419bc06acf481b92153681dad3\"" Jul 7 00:02:27.102700 containerd[1969]: time="2025-07-07T00:02:27.099154436Z" level=info msg="StartContainer for \"7a8eaf990fc1e308eabf7d27f8b813abcbe315419bc06acf481b92153681dad3\"" Jul 7 00:02:27.452503 systemd[1]: Started cri-containerd-7a8eaf990fc1e308eabf7d27f8b813abcbe315419bc06acf481b92153681dad3.scope - libcontainer container 7a8eaf990fc1e308eabf7d27f8b813abcbe315419bc06acf481b92153681dad3. Jul 7 00:02:27.461278 systemd[1]: Started cri-containerd-2d4376cd4cadfb6db91b1918f1e81b93025335fee42dc46ecf16829e823879cb.scope - libcontainer container 2d4376cd4cadfb6db91b1918f1e81b93025335fee42dc46ecf16829e823879cb. 
Jul 7 00:02:27.633383 containerd[1969]: time="2025-07-07T00:02:27.626317268Z" level=info msg="StartContainer for \"2d4376cd4cadfb6db91b1918f1e81b93025335fee42dc46ecf16829e823879cb\" returns successfully" Jul 7 00:02:27.644564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3786133110.mount: Deactivated successfully. Jul 7 00:02:27.746962 containerd[1969]: time="2025-07-07T00:02:27.744915764Z" level=info msg="StartContainer for \"7a8eaf990fc1e308eabf7d27f8b813abcbe315419bc06acf481b92153681dad3\" returns successfully" Jul 7 00:02:29.006877 kubelet[3164]: I0707 00:02:28.991699 3164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-68f9b8bfdc-qtjjz" podStartSLOduration=36.131394833 podStartE2EDuration="52.943423218s" podCreationTimestamp="2025-07-07 00:01:36 +0000 UTC" firstStartedPulling="2025-07-07 00:02:10.185359357 +0000 UTC m=+49.705155943" lastFinishedPulling="2025-07-07 00:02:26.997387763 +0000 UTC m=+66.517184328" observedRunningTime="2025-07-07 00:02:28.893814732 +0000 UTC m=+68.413611319" watchObservedRunningTime="2025-07-07 00:02:28.943423218 +0000 UTC m=+68.463219807" Jul 7 00:02:29.036678 kubelet[3164]: I0707 00:02:29.036231 3164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7d9458bfc6-g2s8x" podStartSLOduration=31.255478695 podStartE2EDuration="48.036207132s" podCreationTimestamp="2025-07-07 00:01:41 +0000 UTC" firstStartedPulling="2025-07-07 00:02:09.692458274 +0000 UTC m=+49.212254851" lastFinishedPulling="2025-07-07 00:02:26.473186681 +0000 UTC m=+65.992983288" observedRunningTime="2025-07-07 00:02:28.926949544 +0000 UTC m=+68.446746131" watchObservedRunningTime="2025-07-07 00:02:29.036207132 +0000 UTC m=+68.556003718" Jul 7 00:02:29.076733 systemd[1]: Started sshd@12-172.31.20.165:22-147.75.109.163:46368.service - OpenSSH per-connection server daemon (147.75.109.163:46368). Jul 7 00:02:29.536277 sshd[6381]: Accepted publickey for core from 147.75.109.163 port 46368 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:02:29.542662 sshd[6381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:02:29.557648 systemd-logind[1958]: New session 13 of user core. Jul 7 00:02:29.564620 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 7 00:02:31.274097 sshd[6381]: pam_unix(sshd:session): session closed for user core Jul 7 00:02:31.288815 systemd[1]: sshd@12-172.31.20.165:22-147.75.109.163:46368.service: Deactivated successfully. Jul 7 00:02:31.298115 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 00:02:31.319195 systemd-logind[1958]: Session 13 logged out. Waiting for processes to exit. Jul 7 00:02:31.324120 systemd-logind[1958]: Removed session 13. Jul 7 00:02:31.930723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount776261578.mount: Deactivated successfully. 
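[Editor's note] The pod_startup_latency_tracker entries above are internally consistent and worth decoding: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same span minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick check with the calico-apiserver-68f9b8bfdc-qtjjz numbers copied from the log (nanosecond-level drift is expected, since the tracker samples its clocks at slightly different points than the printed timestamps):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.000000000 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }

        // Timestamps from the kubelet log entry above.
        created := parse("2025-07-07 00:01:36.000000000 +0000 UTC")
        firstPull := parse("2025-07-07 00:02:10.185359357 +0000 UTC")
        lastPull := parse("2025-07-07 00:02:26.997387763 +0000 UTC")
        running := parse("2025-07-07 00:02:28.943423218 +0000 UTC")

        e2e := running.Sub(created)
        fmt.Println(e2e) // 52.943423218s, exactly the logged podStartE2EDuration

        slo := e2e - lastPull.Sub(firstPull)
        fmt.Println(slo) // 36.131394812s, within ~21ns of the logged podStartSLOduration=36.131394833
    }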
Jul 7 00:02:33.126668 containerd[1969]: time="2025-07-07T00:02:33.126615087Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:33.186146 containerd[1969]: time="2025-07-07T00:02:33.129511969Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 7 00:02:33.205430 containerd[1969]: time="2025-07-07T00:02:33.204991618Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:33.216349 containerd[1969]: time="2025-07-07T00:02:33.216279345Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:33.221439 containerd[1969]: time="2025-07-07T00:02:33.221374476Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 6.183493322s" Jul 7 00:02:33.221439 containerd[1969]: time="2025-07-07T00:02:33.221439729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 7 00:02:33.268388 containerd[1969]: time="2025-07-07T00:02:33.268161521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 7 00:02:33.313521 containerd[1969]: time="2025-07-07T00:02:33.313478862Z" level=info msg="CreateContainer within sandbox \"2c233590e380171664bb9f4319c80ead6b44901fa9e4a26a8fd394c28a144d26\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 7 00:02:33.360089 containerd[1969]: time="2025-07-07T00:02:33.360041413Z" level=info msg="CreateContainer within sandbox \"2c233590e380171664bb9f4319c80ead6b44901fa9e4a26a8fd394c28a144d26\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"da3ce326a8c246a67695920b4e1895c72f205cf258bf2faa80ada9617323fe3b\"" Jul 7 00:02:33.364468 containerd[1969]: time="2025-07-07T00:02:33.364412030Z" level=info msg="StartContainer for \"da3ce326a8c246a67695920b4e1895c72f205cf258bf2faa80ada9617323fe3b\"" Jul 7 00:02:33.535579 systemd[1]: Started cri-containerd-da3ce326a8c246a67695920b4e1895c72f205cf258bf2faa80ada9617323fe3b.scope - libcontainer container da3ce326a8c246a67695920b4e1895c72f205cf258bf2faa80ada9617323fe3b. 
Jul 7 00:02:33.764094 containerd[1969]: time="2025-07-07T00:02:33.763187961Z" level=info msg="StartContainer for \"da3ce326a8c246a67695920b4e1895c72f205cf258bf2faa80ada9617323fe3b\" returns successfully" Jul 7 00:02:34.438881 kubelet[3164]: I0707 00:02:34.434309 3164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-8f2qq" podStartSLOduration=30.934117886 podStartE2EDuration="53.419378736s" podCreationTimestamp="2025-07-07 00:01:41 +0000 UTC" firstStartedPulling="2025-07-07 00:02:10.773603012 +0000 UTC m=+50.293399589" lastFinishedPulling="2025-07-07 00:02:33.258863858 +0000 UTC m=+72.778660439" observedRunningTime="2025-07-07 00:02:34.418969585 +0000 UTC m=+73.938766172" watchObservedRunningTime="2025-07-07 00:02:34.419378736 +0000 UTC m=+73.939175320" Jul 7 00:02:35.861794 containerd[1969]: time="2025-07-07T00:02:35.861496274Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:35.864489 containerd[1969]: time="2025-07-07T00:02:35.863049943Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 7 00:02:35.867521 containerd[1969]: time="2025-07-07T00:02:35.867477774Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:35.870915 containerd[1969]: time="2025-07-07T00:02:35.870773062Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:35.873351 containerd[1969]: time="2025-07-07T00:02:35.872104998Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.603903418s" Jul 7 00:02:35.873351 containerd[1969]: time="2025-07-07T00:02:35.872743009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 7 00:02:35.898383 containerd[1969]: time="2025-07-07T00:02:35.898300346Z" level=info msg="CreateContainer within sandbox \"021dd6616844c4deedfc5267b4bd319130b9734b97b8179950c1e3bbd8c0c81b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 7 00:02:35.923085 containerd[1969]: time="2025-07-07T00:02:35.923002174Z" level=info msg="CreateContainer within sandbox \"021dd6616844c4deedfc5267b4bd319130b9734b97b8179950c1e3bbd8c0c81b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2ecec1e1591d12ed51b4df18b2021059ee9dd7167bab4e4c01a7a94b3af37f77\"" Jul 7 00:02:35.924737 containerd[1969]: time="2025-07-07T00:02:35.924424071Z" level=info msg="StartContainer for \"2ecec1e1591d12ed51b4df18b2021059ee9dd7167bab4e4c01a7a94b3af37f77\"" Jul 7 00:02:36.211538 systemd[1]: Started cri-containerd-2ecec1e1591d12ed51b4df18b2021059ee9dd7167bab4e4c01a7a94b3af37f77.scope - libcontainer container 
2ecec1e1591d12ed51b4df18b2021059ee9dd7167bab4e4c01a7a94b3af37f77. Jul 7 00:02:36.285218 containerd[1969]: time="2025-07-07T00:02:36.284411254Z" level=info msg="StartContainer for \"2ecec1e1591d12ed51b4df18b2021059ee9dd7167bab4e4c01a7a94b3af37f77\" returns successfully" Jul 7 00:02:36.370515 systemd[1]: Started sshd@13-172.31.20.165:22-147.75.109.163:55392.service - OpenSSH per-connection server daemon (147.75.109.163:55392). Jul 7 00:02:36.754171 sshd[6555]: Accepted publickey for core from 147.75.109.163 port 55392 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:02:36.764388 sshd[6555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:02:36.780445 systemd-logind[1958]: New session 14 of user core. Jul 7 00:02:36.787505 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 00:02:37.327219 kubelet[3164]: I0707 00:02:37.317114 3164 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 7 00:02:37.335437 kubelet[3164]: I0707 00:02:37.335391 3164 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 7 00:02:38.122530 sshd[6555]: pam_unix(sshd:session): session closed for user core Jul 7 00:02:38.127661 systemd[1]: sshd@13-172.31.20.165:22-147.75.109.163:55392.service: Deactivated successfully. Jul 7 00:02:38.130321 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 00:02:38.131550 systemd-logind[1958]: Session 14 logged out. Waiting for processes to exit. Jul 7 00:02:38.141608 systemd-logind[1958]: Removed session 14. Jul 7 00:02:41.998853 systemd[1]: run-containerd-runc-k8s.io-a326cff3160af8bc1da2f716ea9893cd5ea6a2073a1a4b4de87d055d6bf6dd0e-runc.vzXO7s.mount: Deactivated successfully. Jul 7 00:02:43.171715 kubelet[3164]: I0707 00:02:43.171634 3164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-prlf8" podStartSLOduration=32.360984821 podStartE2EDuration="1m2.17160481s" podCreationTimestamp="2025-07-07 00:01:41 +0000 UTC" firstStartedPulling="2025-07-07 00:02:06.064270461 +0000 UTC m=+45.584067026" lastFinishedPulling="2025-07-07 00:02:35.874890436 +0000 UTC m=+75.394687015" observedRunningTime="2025-07-07 00:02:36.610902151 +0000 UTC m=+76.130698738" watchObservedRunningTime="2025-07-07 00:02:43.17160481 +0000 UTC m=+82.691401395" Jul 7 00:02:43.180405 systemd[1]: Started sshd@14-172.31.20.165:22-147.75.109.163:55406.service - OpenSSH per-connection server daemon (147.75.109.163:55406). Jul 7 00:02:43.528047 sshd[6620]: Accepted publickey for core from 147.75.109.163 port 55406 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:02:43.533176 sshd[6620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:02:43.545397 systemd-logind[1958]: New session 15 of user core. Jul 7 00:02:43.552495 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 7 00:02:44.659504 sshd[6620]: pam_unix(sshd:session): session closed for user core Jul 7 00:02:44.667665 systemd-logind[1958]: Session 15 logged out. Waiting for processes to exit. Jul 7 00:02:44.668712 systemd[1]: sshd@14-172.31.20.165:22-147.75.109.163:55406.service: Deactivated successfully. Jul 7 00:02:44.675285 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 00:02:44.690429 systemd-logind[1958]: Removed session 15. 
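[Editor's note] The kubelet csi_plugin lines just above record the plugin-registration handshake for csi.tigera.io: the driver's registrar exposes a registration socket, and kubelet then validates the driver over its CSI socket at /var/lib/kubelet/plugins/csi.tigera.io/csi.sock. A sketch of the CSI identity probe one could run against that same socket (assuming root on the node; this reimplements the check for illustration rather than reproducing kubelet's exact code path):

    package main

    import (
        "context"
        "fmt"

        csi "github.com/container-storage-interface/spec/lib/go/csi"
        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
    )

    func main() {
        // Driver socket path taken from the kubelet log line above.
        conn, err := grpc.Dial("unix:///var/lib/kubelet/plugins/csi.tigera.io/csi.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        id := csi.NewIdentityClient(conn)
        info, err := id.GetPluginInfo(context.Background(), &csi.GetPluginInfoRequest{})
        if err != nil {
            panic(err)
        }
        // For the driver registered here, this should report the name "csi.tigera.io".
        fmt.Println(info.GetName(), info.GetVendorVersion())
    }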
Jul 7 00:02:44.700376 systemd[1]: Started sshd@15-172.31.20.165:22-147.75.109.163:55408.service - OpenSSH per-connection server daemon (147.75.109.163:55408). Jul 7 00:02:44.918582 sshd[6633]: Accepted publickey for core from 147.75.109.163 port 55408 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:02:44.920472 sshd[6633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:02:44.927316 systemd-logind[1958]: New session 16 of user core. Jul 7 00:02:44.931445 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 7 00:02:48.856062 sshd[6633]: pam_unix(sshd:session): session closed for user core Jul 7 00:02:48.869555 systemd[1]: sshd@15-172.31.20.165:22-147.75.109.163:55408.service: Deactivated successfully. Jul 7 00:02:48.872617 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 00:02:48.873524 systemd-logind[1958]: Session 16 logged out. Waiting for processes to exit. Jul 7 00:02:48.886807 systemd[1]: Started sshd@16-172.31.20.165:22-147.75.109.163:46522.service - OpenSSH per-connection server daemon (147.75.109.163:46522). Jul 7 00:02:48.887589 systemd-logind[1958]: Removed session 16. Jul 7 00:02:49.131166 sshd[6644]: Accepted publickey for core from 147.75.109.163 port 46522 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:02:49.132127 sshd[6644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:02:49.144485 systemd-logind[1958]: New session 17 of user core. Jul 7 00:02:49.148568 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 7 00:02:50.627852 sshd[6644]: pam_unix(sshd:session): session closed for user core Jul 7 00:02:50.635122 systemd[1]: sshd@16-172.31.20.165:22-147.75.109.163:46522.service: Deactivated successfully. Jul 7 00:02:50.640010 systemd[1]: session-17.scope: Deactivated successfully. Jul 7 00:02:50.641572 systemd-logind[1958]: Session 17 logged out. Waiting for processes to exit. Jul 7 00:02:50.644886 systemd-logind[1958]: Removed session 17. Jul 7 00:02:50.672684 systemd[1]: Started sshd@17-172.31.20.165:22-147.75.109.163:46536.service - OpenSSH per-connection server daemon (147.75.109.163:46536). Jul 7 00:02:50.921432 sshd[6664]: Accepted publickey for core from 147.75.109.163 port 46536 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:02:50.923435 sshd[6664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:02:50.929292 systemd-logind[1958]: New session 18 of user core. Jul 7 00:02:50.943552 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 7 00:02:52.261162 sshd[6664]: pam_unix(sshd:session): session closed for user core Jul 7 00:02:52.266281 systemd[1]: sshd@17-172.31.20.165:22-147.75.109.163:46536.service: Deactivated successfully. Jul 7 00:02:52.269108 systemd[1]: session-18.scope: Deactivated successfully. Jul 7 00:02:52.270910 systemd-logind[1958]: Session 18 logged out. Waiting for processes to exit. Jul 7 00:02:52.272572 systemd-logind[1958]: Removed session 18. Jul 7 00:02:52.291270 systemd[1]: Started sshd@18-172.31.20.165:22-147.75.109.163:46546.service - OpenSSH per-connection server daemon (147.75.109.163:46546). 
Jul 7 00:02:52.539898 sshd[6678]: Accepted publickey for core from 147.75.109.163 port 46546 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:02:52.541521 sshd[6678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:02:52.546507 systemd-logind[1958]: New session 19 of user core. Jul 7 00:02:52.556492 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 7 00:02:52.813011 sshd[6678]: pam_unix(sshd:session): session closed for user core Jul 7 00:02:52.817889 systemd[1]: sshd@18-172.31.20.165:22-147.75.109.163:46546.service: Deactivated successfully. Jul 7 00:02:52.821578 systemd[1]: session-19.scope: Deactivated successfully. Jul 7 00:02:52.825225 systemd-logind[1958]: Session 19 logged out. Waiting for processes to exit. Jul 7 00:02:52.827045 systemd-logind[1958]: Removed session 19. Jul 7 00:02:57.845313 systemd[1]: Started sshd@19-172.31.20.165:22-147.75.109.163:47646.service - OpenSSH per-connection server daemon (147.75.109.163:47646). Jul 7 00:02:58.113446 sshd[6701]: Accepted publickey for core from 147.75.109.163 port 47646 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:02:58.115103 sshd[6701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:02:58.120544 systemd-logind[1958]: New session 20 of user core. Jul 7 00:02:58.131514 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 7 00:02:58.794792 sshd[6701]: pam_unix(sshd:session): session closed for user core Jul 7 00:02:58.799716 systemd[1]: sshd@19-172.31.20.165:22-147.75.109.163:47646.service: Deactivated successfully. Jul 7 00:02:58.802059 systemd[1]: session-20.scope: Deactivated successfully. Jul 7 00:02:58.803214 systemd-logind[1958]: Session 20 logged out. Waiting for processes to exit. Jul 7 00:02:58.805354 systemd-logind[1958]: Removed session 20. Jul 7 00:03:03.845904 systemd[1]: Started sshd@20-172.31.20.165:22-147.75.109.163:47662.service - OpenSSH per-connection server daemon (147.75.109.163:47662). Jul 7 00:03:04.141344 sshd[6739]: Accepted publickey for core from 147.75.109.163 port 47662 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:03:04.145740 sshd[6739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:03:04.158442 systemd-logind[1958]: New session 21 of user core. Jul 7 00:03:04.162968 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 7 00:03:05.247641 sshd[6739]: pam_unix(sshd:session): session closed for user core Jul 7 00:03:05.255358 systemd[1]: sshd@20-172.31.20.165:22-147.75.109.163:47662.service: Deactivated successfully. Jul 7 00:03:05.260777 systemd[1]: session-21.scope: Deactivated successfully. Jul 7 00:03:05.262000 systemd-logind[1958]: Session 21 logged out. Waiting for processes to exit. Jul 7 00:03:05.264523 systemd-logind[1958]: Removed session 21. Jul 7 00:03:06.710953 systemd[1]: run-containerd-runc-k8s.io-da3ce326a8c246a67695920b4e1895c72f205cf258bf2faa80ada9617323fe3b-runc.Da4ioE.mount: Deactivated successfully. Jul 7 00:03:08.779014 systemd[1]: run-containerd-runc-k8s.io-2d4376cd4cadfb6db91b1918f1e81b93025335fee42dc46ecf16829e823879cb-runc.VAM6du.mount: Deactivated successfully. Jul 7 00:03:10.308672 systemd[1]: Started sshd@21-172.31.20.165:22-147.75.109.163:35918.service - OpenSSH per-connection server daemon (147.75.109.163:35918). 
Jul 7 00:03:10.622403 sshd[6796]: Accepted publickey for core from 147.75.109.163 port 35918 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:03:10.627158 sshd[6796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:03:10.637428 systemd-logind[1958]: New session 22 of user core. Jul 7 00:03:10.643419 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 7 00:03:11.915516 sshd[6796]: pam_unix(sshd:session): session closed for user core Jul 7 00:03:11.922670 systemd-logind[1958]: Session 22 logged out. Waiting for processes to exit. Jul 7 00:03:11.923522 systemd[1]: sshd@21-172.31.20.165:22-147.75.109.163:35918.service: Deactivated successfully. Jul 7 00:03:11.930919 systemd[1]: session-22.scope: Deactivated successfully. Jul 7 00:03:11.938529 systemd-logind[1958]: Removed session 22. Jul 7 00:03:12.117951 systemd[1]: run-containerd-runc-k8s.io-a326cff3160af8bc1da2f716ea9893cd5ea6a2073a1a4b4de87d055d6bf6dd0e-runc.v9TarZ.mount: Deactivated successfully. Jul 7 00:03:16.960648 systemd[1]: Started sshd@22-172.31.20.165:22-147.75.109.163:51580.service - OpenSSH per-connection server daemon (147.75.109.163:51580). Jul 7 00:03:17.281273 sshd[6831]: Accepted publickey for core from 147.75.109.163 port 51580 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:03:17.284872 sshd[6831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:03:17.291932 systemd-logind[1958]: New session 23 of user core. Jul 7 00:03:17.297230 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 7 00:03:18.698020 sshd[6831]: pam_unix(sshd:session): session closed for user core Jul 7 00:03:18.706838 systemd[1]: sshd@22-172.31.20.165:22-147.75.109.163:51580.service: Deactivated successfully. Jul 7 00:03:18.710193 systemd[1]: session-23.scope: Deactivated successfully. Jul 7 00:03:18.720190 systemd-logind[1958]: Session 23 logged out. Waiting for processes to exit. Jul 7 00:03:18.722897 systemd-logind[1958]: Removed session 23. Jul 7 00:03:23.734588 systemd[1]: Started sshd@23-172.31.20.165:22-147.75.109.163:51582.service - OpenSSH per-connection server daemon (147.75.109.163:51582). Jul 7 00:03:24.013309 sshd[6846]: Accepted publickey for core from 147.75.109.163 port 51582 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:03:24.020618 sshd[6846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:03:24.029385 systemd-logind[1958]: New session 24 of user core. Jul 7 00:03:24.035452 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 7 00:03:25.267255 sshd[6846]: pam_unix(sshd:session): session closed for user core Jul 7 00:03:25.276042 systemd-logind[1958]: Session 24 logged out. Waiting for processes to exit. Jul 7 00:03:25.276943 systemd[1]: sshd@23-172.31.20.165:22-147.75.109.163:51582.service: Deactivated successfully. Jul 7 00:03:25.282340 systemd[1]: session-24.scope: Deactivated successfully. Jul 7 00:03:25.285724 systemd-logind[1958]: Removed session 24. Jul 7 00:03:38.834571 systemd[1]: cri-containerd-df88e2a6f3b784ac3540936b6d041696f4b8a0238e71ea5c2972f597aa6c8e1a.scope: Deactivated successfully. Jul 7 00:03:38.834886 systemd[1]: cri-containerd-df88e2a6f3b784ac3540936b6d041696f4b8a0238e71ea5c2972f597aa6c8e1a.scope: Consumed 3.840s CPU time, 24.2M memory peak, 0B memory swap peak. 
Jul 7 00:03:39.016834 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df88e2a6f3b784ac3540936b6d041696f4b8a0238e71ea5c2972f597aa6c8e1a-rootfs.mount: Deactivated successfully. Jul 7 00:03:39.098589 containerd[1969]: time="2025-07-07T00:03:39.056063490Z" level=info msg="shim disconnected" id=df88e2a6f3b784ac3540936b6d041696f4b8a0238e71ea5c2972f597aa6c8e1a namespace=k8s.io Jul 7 00:03:39.106030 containerd[1969]: time="2025-07-07T00:03:39.101310144Z" level=warning msg="cleaning up after shim disconnected" id=df88e2a6f3b784ac3540936b6d041696f4b8a0238e71ea5c2972f597aa6c8e1a namespace=k8s.io Jul 7 00:03:39.110640 containerd[1969]: time="2025-07-07T00:03:39.110573687Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:03:39.205896 containerd[1969]: time="2025-07-07T00:03:39.205837850Z" level=warning msg="cleanup warnings time=\"2025-07-07T00:03:39Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 7 00:03:40.058523 systemd[1]: cri-containerd-8174969591062305e08a15bce7c48096948ef42dd905b8752c4ae433c5924161.scope: Deactivated successfully. Jul 7 00:03:40.058837 systemd[1]: cri-containerd-8174969591062305e08a15bce7c48096948ef42dd905b8752c4ae433c5924161.scope: Consumed 11.969s CPU time. Jul 7 00:03:40.092338 containerd[1969]: time="2025-07-07T00:03:40.088827780Z" level=info msg="shim disconnected" id=8174969591062305e08a15bce7c48096948ef42dd905b8752c4ae433c5924161 namespace=k8s.io Jul 7 00:03:40.092338 containerd[1969]: time="2025-07-07T00:03:40.088894297Z" level=warning msg="cleaning up after shim disconnected" id=8174969591062305e08a15bce7c48096948ef42dd905b8752c4ae433c5924161 namespace=k8s.io Jul 7 00:03:40.092338 containerd[1969]: time="2025-07-07T00:03:40.088910506Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:03:40.094286 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8174969591062305e08a15bce7c48096948ef42dd905b8752c4ae433c5924161-rootfs.mount: Deactivated successfully. Jul 7 00:03:40.295540 kubelet[3164]: I0707 00:03:40.295488 3164 scope.go:117] "RemoveContainer" containerID="df88e2a6f3b784ac3540936b6d041696f4b8a0238e71ea5c2972f597aa6c8e1a" Jul 7 00:03:40.304051 kubelet[3164]: I0707 00:03:40.300570 3164 scope.go:117] "RemoveContainer" containerID="8174969591062305e08a15bce7c48096948ef42dd905b8752c4ae433c5924161" Jul 7 00:03:40.373749 containerd[1969]: time="2025-07-07T00:03:40.373457095Z" level=info msg="CreateContainer within sandbox \"b8641c27fbea3704dbdb208d7dfdf913a6a522f6d3172d2f8d52853b6f385c04\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jul 7 00:03:40.374775 containerd[1969]: time="2025-07-07T00:03:40.373467142Z" level=info msg="CreateContainer within sandbox \"6b6e0ebe7521b8ea99ac8b101ce3739b0753664e0413c3ecbbcac0175b544d3e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jul 7 00:03:40.513899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount644162062.mount: Deactivated successfully. Jul 7 00:03:40.525096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1576360121.mount: Deactivated successfully. 
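[Editor's note] At this point the kubelet has noticed two exited containers (the tigera-operator and kube-controller-manager tasks whose shims disconnected above), removed them, and is recreating both with Attempt:1. From the API side, restarts like these surface as RestartCount and LastTerminationState on the pod's container statuses; a small client-go sketch to inspect that, assuming a reachable kubeconfig (illustrative, not part of this log):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // The namespaces involved in the restarts logged above.
        for _, ns := range []string{"tigera-operator", "kube-system"} {
            pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{})
            if err != nil {
                panic(err)
            }
            for _, p := range pods.Items {
                for _, s := range p.Status.ContainerStatuses {
                    if s.RestartCount == 0 {
                        continue
                    }
                    fmt.Printf("%s/%s container=%s restarts=%d", ns, p.Name, s.Name, s.RestartCount)
                    if t := s.LastTerminationState.Terminated; t != nil {
                        fmt.Printf(" lastExit=%d reason=%s", t.ExitCode, t.Reason)
                    }
                    fmt.Println()
                }
            }
        }
    }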
Jul 7 00:03:40.553149 containerd[1969]: time="2025-07-07T00:03:40.552979575Z" level=info msg="CreateContainer within sandbox \"6b6e0ebe7521b8ea99ac8b101ce3739b0753664e0413c3ecbbcac0175b544d3e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"ecd1f32ff795616f500a5b0be13e3c51e04310297aa784c9e0ab0b0792d5c2a4\"" Jul 7 00:03:40.556090 containerd[1969]: time="2025-07-07T00:03:40.556040203Z" level=info msg="CreateContainer within sandbox \"b8641c27fbea3704dbdb208d7dfdf913a6a522f6d3172d2f8d52853b6f385c04\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"0893b140626738228e0a44f43fbafd3c28e6ee41e50bbc76bd3a5599562809ba\"" Jul 7 00:03:40.557954 containerd[1969]: time="2025-07-07T00:03:40.557644700Z" level=info msg="StartContainer for \"ecd1f32ff795616f500a5b0be13e3c51e04310297aa784c9e0ab0b0792d5c2a4\"" Jul 7 00:03:40.557954 containerd[1969]: time="2025-07-07T00:03:40.557775387Z" level=info msg="StartContainer for \"0893b140626738228e0a44f43fbafd3c28e6ee41e50bbc76bd3a5599562809ba\"" Jul 7 00:03:40.615453 systemd[1]: Started cri-containerd-0893b140626738228e0a44f43fbafd3c28e6ee41e50bbc76bd3a5599562809ba.scope - libcontainer container 0893b140626738228e0a44f43fbafd3c28e6ee41e50bbc76bd3a5599562809ba. Jul 7 00:03:40.623449 systemd[1]: Started cri-containerd-ecd1f32ff795616f500a5b0be13e3c51e04310297aa784c9e0ab0b0792d5c2a4.scope - libcontainer container ecd1f32ff795616f500a5b0be13e3c51e04310297aa784c9e0ab0b0792d5c2a4. Jul 7 00:03:40.710439 containerd[1969]: time="2025-07-07T00:03:40.710253980Z" level=info msg="StartContainer for \"0893b140626738228e0a44f43fbafd3c28e6ee41e50bbc76bd3a5599562809ba\" returns successfully" Jul 7 00:03:40.721331 containerd[1969]: time="2025-07-07T00:03:40.721279005Z" level=info msg="StartContainer for \"ecd1f32ff795616f500a5b0be13e3c51e04310297aa784c9e0ab0b0792d5c2a4\" returns successfully" Jul 7 00:03:44.105947 kubelet[3164]: E0707 00:03:44.105804 3164 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.165:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-165?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jul 7 00:03:44.498405 systemd[1]: cri-containerd-db282ef5bc093132b558fa35afb751390bc2dceb27cddb3c93f93a4c090f4d69.scope: Deactivated successfully. Jul 7 00:03:44.498651 systemd[1]: cri-containerd-db282ef5bc093132b558fa35afb751390bc2dceb27cddb3c93f93a4c090f4d69.scope: Consumed 2.423s CPU time, 18.5M memory peak, 0B memory swap peak. Jul 7 00:03:44.551714 containerd[1969]: time="2025-07-07T00:03:44.551625013Z" level=info msg="shim disconnected" id=db282ef5bc093132b558fa35afb751390bc2dceb27cddb3c93f93a4c090f4d69 namespace=k8s.io Jul 7 00:03:44.551714 containerd[1969]: time="2025-07-07T00:03:44.551704717Z" level=warning msg="cleaning up after shim disconnected" id=db282ef5bc093132b558fa35afb751390bc2dceb27cddb3c93f93a4c090f4d69 namespace=k8s.io Jul 7 00:03:44.551714 containerd[1969]: time="2025-07-07T00:03:44.551717052Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:03:44.554042 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db282ef5bc093132b558fa35afb751390bc2dceb27cddb3c93f93a4c090f4d69-rootfs.mount: Deactivated successfully. 
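[Editor's note] The "Failed to update lease" error above is the kubelet timing out while PUT-ting its node Lease in kube-node-lease, the object that signals node liveness; with the colocated kube-controller-manager and kube-scheduler containers restarting in the same window, apiserver slowness at 172.31.20.165:6443 is a plausible cause. The lease is an ordinary API object one can read back, e.g. (client-go sketch under the same kubeconfig assumption as above):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Node name taken from the lease URL in the log entry above.
        lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(
            context.TODO(), "ip-172-31-20-165", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // A healthy kubelet renews this every few seconds; a stale RenewTime
        // would corroborate the Put timeouts seen in the log.
        if lease.Spec.HolderIdentity != nil {
            fmt.Println("holder:", *lease.Spec.HolderIdentity)
        }
        if lease.Spec.RenewTime != nil {
            fmt.Println("renewed:", lease.Spec.RenewTime.Time)
        }
    }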
Jul 7 00:03:45.244892 kubelet[3164]: I0707 00:03:45.244847 3164 scope.go:117] "RemoveContainer" containerID="db282ef5bc093132b558fa35afb751390bc2dceb27cddb3c93f93a4c090f4d69"
Jul 7 00:03:45.247539 containerd[1969]: time="2025-07-07T00:03:45.247500032Z" level=info msg="CreateContainer within sandbox \"01f5fea1a8c062811c19c4fea4c1910dff665a0b877c1ba80001b9a45e5e574b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul 7 00:03:45.290426 containerd[1969]: time="2025-07-07T00:03:45.290359136Z" level=info msg="CreateContainer within sandbox \"01f5fea1a8c062811c19c4fea4c1910dff665a0b877c1ba80001b9a45e5e574b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"59669dbb75e292f39a46f6cc0e6c281ead305f25efaba57449216f44973339e3\""
Jul 7 00:03:45.290973 containerd[1969]: time="2025-07-07T00:03:45.290935084Z" level=info msg="StartContainer for \"59669dbb75e292f39a46f6cc0e6c281ead305f25efaba57449216f44973339e3\""
Jul 7 00:03:45.338436 systemd[1]: Started cri-containerd-59669dbb75e292f39a46f6cc0e6c281ead305f25efaba57449216f44973339e3.scope - libcontainer container 59669dbb75e292f39a46f6cc0e6c281ead305f25efaba57449216f44973339e3.
Jul 7 00:03:45.406734 containerd[1969]: time="2025-07-07T00:03:45.406183258Z" level=info msg="StartContainer for \"59669dbb75e292f39a46f6cc0e6c281ead305f25efaba57449216f44973339e3\" returns successfully"
Jul 7 00:03:52.446898 systemd[1]: cri-containerd-0893b140626738228e0a44f43fbafd3c28e6ee41e50bbc76bd3a5599562809ba.scope: Deactivated successfully.
Jul 7 00:03:52.474490 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0893b140626738228e0a44f43fbafd3c28e6ee41e50bbc76bd3a5599562809ba-rootfs.mount: Deactivated successfully.
Jul 7 00:03:52.489806 containerd[1969]: time="2025-07-07T00:03:52.489467053Z" level=info msg="shim disconnected" id=0893b140626738228e0a44f43fbafd3c28e6ee41e50bbc76bd3a5599562809ba namespace=k8s.io
Jul 7 00:03:52.489806 containerd[1969]: time="2025-07-07T00:03:52.489789490Z" level=warning msg="cleaning up after shim disconnected" id=0893b140626738228e0a44f43fbafd3c28e6ee41e50bbc76bd3a5599562809ba namespace=k8s.io
Jul 7 00:03:52.490352 containerd[1969]: time="2025-07-07T00:03:52.489819911Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 00:03:53.288537 kubelet[3164]: I0707 00:03:53.288454 3164 scope.go:117] "RemoveContainer" containerID="8174969591062305e08a15bce7c48096948ef42dd905b8752c4ae433c5924161"
Jul 7 00:03:53.289175 kubelet[3164]: I0707 00:03:53.289135 3164 scope.go:117] "RemoveContainer" containerID="0893b140626738228e0a44f43fbafd3c28e6ee41e50bbc76bd3a5599562809ba"
Jul 7 00:03:53.321680 kubelet[3164]: E0707 00:03:53.299958 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-747864d56d-lmh6g_tigera-operator(c03dc967-7a32-476e-8c39-68e044a7f679)\"" pod="tigera-operator/tigera-operator-747864d56d-lmh6g" podUID="c03dc967-7a32-476e-8c39-68e044a7f679"
Jul 7 00:03:53.441460 containerd[1969]: time="2025-07-07T00:03:53.441402717Z" level=info msg="RemoveContainer for \"8174969591062305e08a15bce7c48096948ef42dd905b8752c4ae433c5924161\""
Jul 7 00:03:53.463460 containerd[1969]: time="2025-07-07T00:03:53.463383715Z" level=info msg="RemoveContainer for \"8174969591062305e08a15bce7c48096948ef42dd905b8752c4ae433c5924161\" returns successfully"
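[Editor's note] The pod_workers error above is kubelet's restart backoff: tigera-operator has now failed twice, so its next StartContainer is delayed 10s, and the pod shows CrashLoopBackOff while kubelet garbage-collects the older dead container (the RemoveContainer calls). A hedged client-go sketch for reading a pod's restart count and backoff state from the API server follows; the kubeconfig path is an assumption.

    // restarts.go - report restart counts and waiting reasons (e.g.
    // CrashLoopBackOff) for the tigera-operator pod named in the log.
    // Sketch only; the kubeconfig path is an assumed placeholder.
    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
        if err != nil {
            log.Fatal(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            log.Fatal(err)
        }

        pod, err := clientset.CoreV1().Pods("tigera-operator").Get(
            context.TODO(), "tigera-operator-747864d56d-lmh6g", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, cs := range pod.Status.ContainerStatuses {
            fmt.Printf("%s restarts=%d\n", cs.Name, cs.RestartCount)
            if cs.State.Waiting != nil {
                // During backoff this reads: CrashLoopBackOff / back-off 10s restarting ...
                fmt.Printf("  waiting: %s: %s\n", cs.State.Waiting.Reason, cs.State.Waiting.Message)
            }
        }
    }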
update lease" err="Put \"https://172.31.20.165:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-165?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"