Jul 6 23:58:17.910619 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 22:23:50 -00 2025
Jul 6 23:58:17.910658 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:58:17.910678 kernel: BIOS-provided physical RAM map:
Jul 6 23:58:17.910690 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 6 23:58:17.910701 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Jul 6 23:58:17.910713 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Jul 6 23:58:17.910728 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Jul 6 23:58:17.910741 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Jul 6 23:58:17.910753 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Jul 6 23:58:17.910769 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Jul 6 23:58:17.910782 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Jul 6 23:58:17.910795 kernel: NX (Execute Disable) protection: active
Jul 6 23:58:17.910807 kernel: APIC: Static calls initialized
Jul 6 23:58:17.910820 kernel: efi: EFI v2.7 by EDK II
Jul 6 23:58:17.910836 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518
Jul 6 23:58:17.910854 kernel: SMBIOS 2.7 present.
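The e820 map above is the firmware's inventory of physical memory. A minimal sketch (Python 3, standard library only; the three "usable" ranges are pasted in verbatim from the log) that totals the usable RAM:

    import re

    # The three "usable" BIOS-e820 ranges from the log above, copied verbatim.
    E820_LOG = """
    BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
    BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
    BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
    """

    ENTRY = re.compile(r"\[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] usable")

    total = 0
    for m in ENTRY.finditer(E820_LOG):
        start, end = int(m.group(1), 16), int(m.group(2), 16)
        total += end - start + 1  # e820 ranges are inclusive

    # ~1990.1 MiB, consistent with the 2037804K total the kernel reports later
    print(f"usable: {total / 2**20:.1f} MiB")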
Jul 6 23:58:17.910868 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jul 6 23:58:17.910882 kernel: Hypervisor detected: KVM
Jul 6 23:58:17.910896 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 6 23:58:17.910910 kernel: kvm-clock: using sched offset of 3714848422 cycles
Jul 6 23:58:17.911017 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 6 23:58:17.911045 kernel: tsc: Detected 2499.994 MHz processor
Jul 6 23:58:17.911058 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 6 23:58:17.911071 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 6 23:58:17.911086 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Jul 6 23:58:17.911104 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 6 23:58:17.911119 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 6 23:58:17.911133 kernel: Using GB pages for direct mapping
Jul 6 23:58:17.911147 kernel: Secure boot disabled
Jul 6 23:58:17.911161 kernel: ACPI: Early table checksum verification disabled
Jul 6 23:58:17.911175 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Jul 6 23:58:17.911190 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Jul 6 23:58:17.911204 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jul 6 23:58:17.911219 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jul 6 23:58:17.911236 kernel: ACPI: FACS 0x00000000789D0000 000040
Jul 6 23:58:17.911250 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jul 6 23:58:17.911264 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jul 6 23:58:17.911279 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jul 6 23:58:17.911293 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jul 6 23:58:17.911308 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jul 6 23:58:17.911328 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jul 6 23:58:17.911346 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jul 6 23:58:17.911362 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Jul 6 23:58:17.911377 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Jul 6 23:58:17.911391 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Jul 6 23:58:17.911406 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Jul 6 23:58:17.911421 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Jul 6 23:58:17.911439 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Jul 6 23:58:17.911454 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Jul 6 23:58:17.911469 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Jul 6 23:58:17.911484 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Jul 6 23:58:17.911499 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Jul 6 23:58:17.911514 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Jul 6 23:58:17.911529 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
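The checksum verification the kernel says it skipped here is simple: every ACPI table, 36-byte header included, must sum to zero modulo 256. A sketch that re-checks the tables a booted Linux system exports under sysfs (assumes root privileges to read them):

    import pathlib

    # Every ACPI table's bytes, header included, must sum to 0 mod 256.
    for table in sorted(pathlib.Path("/sys/firmware/acpi/tables").iterdir()):
        if table.is_file():
            data = table.read_bytes()
            status = "OK" if sum(data) % 256 == 0 else "BAD"
            print(f"{table.name:8s} {len(data):6d} bytes  checksum {status}")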
Jul 6 23:58:17.911544 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jul 6 23:58:17.911558 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jul 6 23:58:17.911573 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jul 6 23:58:17.911592 kernel: NUMA: Initialized distance table, cnt=1
Jul 6 23:58:17.911606 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff]
Jul 6 23:58:17.911621 kernel: Zone ranges:
Jul 6 23:58:17.911636 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 6 23:58:17.911651 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Jul 6 23:58:17.911665 kernel: Normal empty
Jul 6 23:58:17.911679 kernel: Movable zone start for each node
Jul 6 23:58:17.911693 kernel: Early memory node ranges
Jul 6 23:58:17.911708 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 6 23:58:17.911726 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Jul 6 23:58:17.911741 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Jul 6 23:58:17.911755 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Jul 6 23:58:17.911771 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 6 23:58:17.911786 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 6 23:58:17.911800 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jul 6 23:58:17.911814 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Jul 6 23:58:17.911829 kernel: ACPI: PM-Timer IO Port: 0xb008
Jul 6 23:58:17.911843 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 6 23:58:17.911858 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jul 6 23:58:17.911876 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 6 23:58:17.911892 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 6 23:58:17.911907 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 6 23:58:17.911923 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 6 23:58:17.911938 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 6 23:58:17.911953 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 6 23:58:17.911969 kernel: TSC deadline timer available
Jul 6 23:58:17.911984 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jul 6 23:58:17.911999 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 6 23:58:17.912017 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Jul 6 23:58:17.912429 kernel: Booting paravirtualized kernel on KVM
Jul 6 23:58:17.912452 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 6 23:58:17.912469 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 6 23:58:17.912483 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Jul 6 23:58:17.912499 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Jul 6 23:58:17.912515 kernel: pcpu-alloc: [0] 0 1
Jul 6 23:58:17.912528 kernel: kvm-guest: PV spinlocks enabled
Jul 6 23:58:17.912542 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 6 23:58:17.912563 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:58:17.912578 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 6 23:58:17.912591 kernel: random: crng init done
Jul 6 23:58:17.912604 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 6 23:58:17.912626 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 6 23:58:17.912640 kernel: Fallback order for Node 0: 0
Jul 6 23:58:17.912653 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Jul 6 23:58:17.912667 kernel: Policy zone: DMA32
Jul 6 23:58:17.915096 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 6 23:58:17.915121 kernel: Memory: 1874608K/2037804K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42868K init, 2324K bss, 162936K reserved, 0K cma-reserved)
Jul 6 23:58:17.915138 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 6 23:58:17.915155 kernel: Kernel/User page tables isolation: enabled
Jul 6 23:58:17.915171 kernel: ftrace: allocating 37966 entries in 149 pages
Jul 6 23:58:17.915187 kernel: ftrace: allocated 149 pages with 4 groups
Jul 6 23:58:17.915203 kernel: Dynamic Preempt: voluntary
Jul 6 23:58:17.915219 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 6 23:58:17.915236 kernel: rcu: RCU event tracing is enabled.
Jul 6 23:58:17.915259 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 6 23:58:17.915275 kernel: Trampoline variant of Tasks RCU enabled.
Jul 6 23:58:17.915291 kernel: Rude variant of Tasks RCU enabled.
Jul 6 23:58:17.915307 kernel: Tracing variant of Tasks RCU enabled.
Jul 6 23:58:17.915323 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 6 23:58:17.915338 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 6 23:58:17.915354 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 6 23:58:17.915386 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 6 23:58:17.915402 kernel: Console: colour dummy device 80x25
Jul 6 23:58:17.915420 kernel: printk: console [tty0] enabled
Jul 6 23:58:17.915436 kernel: printk: console [ttyS0] enabled
Jul 6 23:58:17.915456 kernel: ACPI: Core revision 20230628
Jul 6 23:58:17.915473 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jul 6 23:58:17.915490 kernel: APIC: Switch to symmetric I/O mode setup
Jul 6 23:58:17.915507 kernel: x2apic enabled
Jul 6 23:58:17.915524 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 6 23:58:17.915542 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns
Jul 6 23:58:17.915562 kernel: Calibrating delay loop (skipped) preset value.. 4999.98 BogoMIPS (lpj=2499994)
Jul 6 23:58:17.915579 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jul 6 23:58:17.915596 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jul 6 23:58:17.915613 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 6 23:58:17.915630 kernel: Spectre V2 : Mitigation: Retpolines
Jul 6 23:58:17.915646 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 6 23:58:17.915663 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
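Note the duplicated rootflags=rw and mount.usrflags=ro in the command line above: the bootloader prepends them before the configured arguments. The kernel treats the command line as whitespace-separated tokens, either bare flags or key=value pairs, and anything it does not recognize (like BOOT_IMAGE=) is passed through to init, as the next log line records. A minimal parser sketch (Python; parse_cmdline is a hypothetical helper, not a kernel API):

    # Sketch: split a kernel command line into bare flags and key=value pairs.
    # With a dict, later duplicates simply overwrite earlier identical values.
    def parse_cmdline(cmdline: str):
        params, flags = {}, []
        for token in cmdline.split():
            if "=" in token:
                key, value = token.split("=", 1)
                params[key] = value
            else:
                flags.append(token)
        return params, flags

    # On a booted Linux machine the same line is visible at /proc/cmdline.
    params, _ = parse_cmdline(open("/proc/cmdline").read())
    print(params.get("root"), params.get("verity.usrhash"))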
Jul 6 23:58:17.915680 kernel: RETBleed: Vulnerable
Jul 6 23:58:17.915696 kernel: Speculative Store Bypass: Vulnerable
Jul 6 23:58:17.915716 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 6 23:58:17.915732 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 6 23:58:17.915749 kernel: GDS: Unknown: Dependent on hypervisor status
Jul 6 23:58:17.915765 kernel: ITS: Mitigation: Aligned branch/return thunks
Jul 6 23:58:17.915782 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 6 23:58:17.915797 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 6 23:58:17.915814 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 6 23:58:17.915831 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jul 6 23:58:17.915847 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jul 6 23:58:17.915864 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jul 6 23:58:17.915879 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jul 6 23:58:17.915897 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jul 6 23:58:17.915914 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jul 6 23:58:17.915931 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 6 23:58:17.915947 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jul 6 23:58:17.915964 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jul 6 23:58:17.915980 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jul 6 23:58:17.915995 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jul 6 23:58:17.916012 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jul 6 23:58:17.916029 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jul 6 23:58:17.916138 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jul 6 23:58:17.916155 kernel: Freeing SMP alternatives memory: 32K
Jul 6 23:58:17.916171 kernel: pid_max: default: 32768 minimum: 301
Jul 6 23:58:17.916192 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 6 23:58:17.916208 kernel: landlock: Up and running.
Jul 6 23:58:17.916224 kernel: SELinux: Initializing.
Jul 6 23:58:17.916240 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 6 23:58:17.916257 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 6 23:58:17.916274 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jul 6 23:58:17.916290 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:58:17.916307 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:58:17.916324 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:58:17.916342 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jul 6 23:58:17.916361 kernel: signal: max sigframe size: 3632
Jul 6 23:58:17.916378 kernel: rcu: Hierarchical SRCU implementation.
Jul 6 23:58:17.916395 kernel: rcu: Max phase no-delay instances is 400.
Jul 6 23:58:17.916411 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 6 23:58:17.916428 kernel: smp: Bringing up secondary CPUs ...
Jul 6 23:58:17.916444 kernel: smpboot: x86: Booting SMP configuration:
Jul 6 23:58:17.916461 kernel: .... node #0, CPUs: #1
Jul 6 23:58:17.916479 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jul 6 23:58:17.916493 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jul 6 23:58:17.916510 kernel: smp: Brought up 1 node, 2 CPUs
Jul 6 23:58:17.916525 kernel: smpboot: Max logical packages: 1
Jul 6 23:58:17.916539 kernel: smpboot: Total of 2 processors activated (9999.97 BogoMIPS)
Jul 6 23:58:17.916555 kernel: devtmpfs: initialized
Jul 6 23:58:17.916567 kernel: x86/mm: Memory block size: 128MB
Jul 6 23:58:17.916579 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Jul 6 23:58:17.916599 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 6 23:58:17.916620 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 6 23:58:17.916645 kernel: pinctrl core: initialized pinctrl subsystem
Jul 6 23:58:17.916664 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 6 23:58:17.916681 kernel: audit: initializing netlink subsys (disabled)
Jul 6 23:58:17.916694 kernel: audit: type=2000 audit(1751846297.836:1): state=initialized audit_enabled=0 res=1
Jul 6 23:58:17.916710 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 6 23:58:17.916726 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 6 23:58:17.916741 kernel: cpuidle: using governor menu
Jul 6 23:58:17.916757 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 6 23:58:17.916773 kernel: dca service started, version 1.12.1
Jul 6 23:58:17.916791 kernel: PCI: Using configuration type 1 for base access
Jul 6 23:58:17.916807 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
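The mitigation and CPU-bug lines above have runtime counterparts: the kernel exports one file per issue under sysfs, carrying the same status strings. A small sketch to dump them on a running Linux system:

    import pathlib

    # Prints strings like "Vulnerable" or "Mitigation: Retpolines", matching
    # the RETBleed/MDS/MMIO Stale Data lines in the boot log above.
    vulns = pathlib.Path("/sys/devices/system/cpu/vulnerabilities")
    for entry in sorted(vulns.iterdir()):
        print(f"{entry.name:28s} {entry.read_text().strip()}")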
Jul 6 23:58:17.916823 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 6 23:58:17.916839 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 6 23:58:17.916854 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 6 23:58:17.916870 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 6 23:58:17.916886 kernel: ACPI: Added _OSI(Module Device)
Jul 6 23:58:17.916901 kernel: ACPI: Added _OSI(Processor Device)
Jul 6 23:58:17.916916 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 6 23:58:17.916935 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jul 6 23:58:17.916950 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 6 23:58:17.916966 kernel: ACPI: Interpreter enabled
Jul 6 23:58:17.916981 kernel: ACPI: PM: (supports S0 S5)
Jul 6 23:58:17.916997 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 6 23:58:17.917012 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 6 23:58:17.917028 kernel: PCI: Using E820 reservations for host bridge windows
Jul 6 23:58:17.919091 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 6 23:58:17.919109 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 6 23:58:17.919347 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jul 6 23:58:17.919497 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jul 6 23:58:17.919634 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jul 6 23:58:17.919654 kernel: acpiphp: Slot [3] registered
Jul 6 23:58:17.919669 kernel: acpiphp: Slot [4] registered
Jul 6 23:58:17.919685 kernel: acpiphp: Slot [5] registered
Jul 6 23:58:17.919700 kernel: acpiphp: Slot [6] registered
Jul 6 23:58:17.919716 kernel: acpiphp: Slot [7] registered
Jul 6 23:58:17.919736 kernel: acpiphp: Slot [8] registered
Jul 6 23:58:17.919751 kernel: acpiphp: Slot [9] registered
Jul 6 23:58:17.919766 kernel: acpiphp: Slot [10] registered
Jul 6 23:58:17.919782 kernel: acpiphp: Slot [11] registered
Jul 6 23:58:17.919798 kernel: acpiphp: Slot [12] registered
Jul 6 23:58:17.919813 kernel: acpiphp: Slot [13] registered
Jul 6 23:58:17.919829 kernel: acpiphp: Slot [14] registered
Jul 6 23:58:17.919844 kernel: acpiphp: Slot [15] registered
Jul 6 23:58:17.919860 kernel: acpiphp: Slot [16] registered
Jul 6 23:58:17.919878 kernel: acpiphp: Slot [17] registered
Jul 6 23:58:17.919893 kernel: acpiphp: Slot [18] registered
Jul 6 23:58:17.919909 kernel: acpiphp: Slot [19] registered
Jul 6 23:58:17.919924 kernel: acpiphp: Slot [20] registered
Jul 6 23:58:17.919940 kernel: acpiphp: Slot [21] registered
Jul 6 23:58:17.919955 kernel: acpiphp: Slot [22] registered
Jul 6 23:58:17.919970 kernel: acpiphp: Slot [23] registered
Jul 6 23:58:17.919986 kernel: acpiphp: Slot [24] registered
Jul 6 23:58:17.920001 kernel: acpiphp: Slot [25] registered
Jul 6 23:58:17.920016 kernel: acpiphp: Slot [26] registered
Jul 6 23:58:17.920048 kernel: acpiphp: Slot [27] registered
Jul 6 23:58:17.920064 kernel: acpiphp: Slot [28] registered
Jul 6 23:58:17.920079 kernel: acpiphp: Slot [29] registered
Jul 6 23:58:17.920095 kernel: acpiphp: Slot [30] registered
Jul 6 23:58:17.920110 kernel: acpiphp: Slot [31] registered
Jul 6 23:58:17.920126 kernel: PCI host bridge to bus 0000:00
Jul 6 23:58:17.920266 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 6 23:58:17.920388 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 6 23:58:17.920510 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 6 23:58:17.920628 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jul 6 23:58:17.920744 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Jul 6 23:58:17.920859 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 6 23:58:17.921008 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jul 6 23:58:17.925331 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jul 6 23:58:17.925510 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Jul 6 23:58:17.925646 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jul 6 23:58:17.925780 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jul 6 23:58:17.925913 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jul 6 23:58:17.926059 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jul 6 23:58:17.926201 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jul 6 23:58:17.926349 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jul 6 23:58:17.926498 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jul 6 23:58:17.926653 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Jul 6 23:58:17.926796 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Jul 6 23:58:17.926947 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jul 6 23:58:17.927116 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Jul 6 23:58:17.927263 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 6 23:58:17.927423 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jul 6 23:58:17.927587 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Jul 6 23:58:17.927741 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jul 6 23:58:17.927896 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Jul 6 23:58:17.927921 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 6 23:58:17.927937 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 6 23:58:17.927954 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 6 23:58:17.927970 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 6 23:58:17.927991 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 6 23:58:17.928008 kernel: iommu: Default domain type: Translated
Jul 6 23:58:17.928025 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 6 23:58:17.928159 kernel: efivars: Registered efivars operations
Jul 6 23:58:17.928175 kernel: PCI: Using ACPI for IRQ routing
Jul 6 23:58:17.928191 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 6 23:58:17.928207 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Jul 6 23:58:17.928223 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Jul 6 23:58:17.928394 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jul 6 23:58:17.928559 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jul 6 23:58:17.928701 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 6 23:58:17.928723 kernel: vgaarb: loaded
Jul 6 23:58:17.928741 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jul 6 23:58:17.928757 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jul 6 23:58:17.928774 kernel: clocksource: Switched to clocksource kvm-clock
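The bracketed [vendor:device] pairs in the probe lines identify each PCI function; 1d0f is Amazon's PCI vendor ID, so [1d0f:8061] above is the EBS NVMe controller and [1d0f:ec20] the ENA network adapter. A sketch that reproduces the listing from sysfs on a running Linux system:

    import pathlib

    # Emits lines like "0000:00:04.0 [1d0f:8061] class 0x010802", mirroring
    # the "pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802" probe above.
    for dev in sorted(pathlib.Path("/sys/bus/pci/devices").iterdir()):
        vendor = (dev / "vendor").read_text().strip()   # e.g. 0x1d0f
        device = (dev / "device").read_text().strip()   # e.g. 0x8061
        pclass = (dev / "class").read_text().strip()    # e.g. 0x010802
        print(f"{dev.name} [{vendor[2:]}:{device[2:]}] class {pclass}")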
Jul 6 23:58:17.928791 kernel: VFS: Disk quotas dquot_6.6.0
Jul 6 23:58:17.928807 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 6 23:58:17.928823 kernel: pnp: PnP ACPI init
Jul 6 23:58:17.928844 kernel: pnp: PnP ACPI: found 5 devices
Jul 6 23:58:17.928861 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 6 23:58:17.928878 kernel: NET: Registered PF_INET protocol family
Jul 6 23:58:17.928894 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 6 23:58:17.928912 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jul 6 23:58:17.928928 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 6 23:58:17.928945 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 6 23:58:17.928962 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jul 6 23:58:17.928982 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jul 6 23:58:17.928999 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 6 23:58:17.929016 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 6 23:58:17.929045 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 6 23:58:17.929059 kernel: NET: Registered PF_XDP protocol family
Jul 6 23:58:17.929196 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 6 23:58:17.929325 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 6 23:58:17.929449 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 6 23:58:17.929572 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jul 6 23:58:17.929700 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Jul 6 23:58:17.929845 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 6 23:58:17.929867 kernel: PCI: CLS 0 bytes, default 64
Jul 6 23:58:17.929885 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jul 6 23:58:17.929902 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns
Jul 6 23:58:17.929919 kernel: clocksource: Switched to clocksource tsc
Jul 6 23:58:17.929937 kernel: Initialise system trusted keyrings
Jul 6 23:58:17.929954 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jul 6 23:58:17.929974 kernel: Key type asymmetric registered
Jul 6 23:58:17.929990 kernel: Asymmetric key parser 'x509' registered
Jul 6 23:58:17.930007 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 6 23:58:17.930024 kernel: io scheduler mq-deadline registered
Jul 6 23:58:17.931143 kernel: io scheduler kyber registered
Jul 6 23:58:17.931165 kernel: io scheduler bfq registered
Jul 6 23:58:17.931181 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 6 23:58:17.931212 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 6 23:58:17.931227 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 6 23:58:17.931247 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 6 23:58:17.931260 kernel: i8042: Warning: Keylock active
Jul 6 23:58:17.931276 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 6 23:58:17.931292 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 6 23:58:17.931473 kernel: rtc_cmos 00:00: RTC can wake from S4
Jul 6 23:58:17.931610 kernel: rtc_cmos 00:00: registered as rtc0
Jul 6 23:58:17.931740 kernel: rtc_cmos 00:00: setting system clock to 2025-07-06T23:58:17 UTC (1751846297)
Jul 6 23:58:17.931862 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jul 6 23:58:17.931886 kernel: intel_pstate: CPU model not supported
Jul 6 23:58:17.931902 kernel: efifb: probing for efifb
Jul 6 23:58:17.931918 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Jul 6 23:58:17.931934 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Jul 6 23:58:17.931949 kernel: efifb: scrolling: redraw
Jul 6 23:58:17.931964 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 6 23:58:17.931981 kernel: Console: switching to colour frame buffer device 100x37
Jul 6 23:58:17.931996 kernel: fb0: EFI VGA frame buffer device
Jul 6 23:58:17.932011 kernel: pstore: Using crash dump compression: deflate
Jul 6 23:58:17.932052 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 6 23:58:17.932068 kernel: NET: Registered PF_INET6 protocol family
Jul 6 23:58:17.932082 kernel: Segment Routing with IPv6
Jul 6 23:58:17.932098 kernel: In-situ OAM (IOAM) with IPv6
Jul 6 23:58:17.932115 kernel: NET: Registered PF_PACKET protocol family
Jul 6 23:58:17.932129 kernel: Key type dns_resolver registered
Jul 6 23:58:17.932144 kernel: IPI shorthand broadcast: enabled
Jul 6 23:58:17.932185 kernel: sched_clock: Marking stable (456001819, 131462563)->(680333127, -92868745)
Jul 6 23:58:17.932207 kernel: registered taskstats version 1
Jul 6 23:58:17.932228 kernel: Loading compiled-in X.509 certificates
Jul 6 23:58:17.932245 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 6372c48ca52cc7f7bbee5675b604584c1c68ec5b'
Jul 6 23:58:17.932263 kernel: Key type .fscrypt registered
Jul 6 23:58:17.932280 kernel: Key type fscrypt-provisioning registered
Jul 6 23:58:17.932298 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 6 23:58:17.932316 kernel: ima: Allocated hash algorithm: sha1
Jul 6 23:58:17.932334 kernel: ima: No architecture policies found
Jul 6 23:58:17.932351 kernel: clk: Disabling unused clocks
Jul 6 23:58:17.932370 kernel: Freeing unused kernel image (initmem) memory: 42868K
Jul 6 23:58:17.932391 kernel: Write protecting the kernel read-only data: 36864k
Jul 6 23:58:17.932409 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Jul 6 23:58:17.932426 kernel: Run /init as init process
Jul 6 23:58:17.932444 kernel: with arguments:
Jul 6 23:58:17.932461 kernel: /init
Jul 6 23:58:17.932478 kernel: with environment:
Jul 6 23:58:17.932495 kernel: HOME=/
Jul 6 23:58:17.932513 kernel: TERM=linux
Jul 6 23:58:17.932530 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 6 23:58:17.932554 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 6 23:58:17.932575 systemd[1]: Detected virtualization amazon.
Jul 6 23:58:17.932594 systemd[1]: Detected architecture x86-64.
Jul 6 23:58:17.932612 systemd[1]: Running in initrd.
Jul 6 23:58:17.932630 systemd[1]: No hostname configured, using default hostname.
Jul 6 23:58:17.932647 systemd[1]: Hostname set to <localhost>.
Jul 6 23:58:17.932667 systemd[1]: Initializing machine ID from VM UUID.
Jul 6 23:58:17.932688 systemd[1]: Queued start job for default target initrd.target.
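The rtc_cmos line above pairs a wall-clock time with a Unix epoch value; the two really do denote the same instant, which is easy to confirm:

    from datetime import datetime, timezone

    # 1751846297 is the epoch value from the rtc_cmos line above.
    print(datetime.fromtimestamp(1751846297, tz=timezone.utc).isoformat())
    # -> 2025-07-06T23:58:17+00:00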
Jul 6 23:58:17.932706 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:58:17.932725 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:58:17.932745 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 6 23:58:17.932763 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:58:17.932782 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 6 23:58:17.932801 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 6 23:58:17.932825 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 6 23:58:17.932844 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 6 23:58:17.932863 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:58:17.932882 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:58:17.932900 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:58:17.932922 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:58:17.932940 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:58:17.932958 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:58:17.932977 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:58:17.932996 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:58:17.933015 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 6 23:58:17.934718 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 6 23:58:17.934744 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:58:17.934771 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:58:17.934790 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:58:17.934809 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:58:17.934828 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 6 23:58:17.934847 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:58:17.934865 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 6 23:58:17.934885 systemd[1]: Starting systemd-fsck-usr.service...
Jul 6 23:58:17.934903 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:58:17.934930 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:58:17.934952 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:58:17.935021 systemd-journald[178]: Collecting audit messages is disabled.
Jul 6 23:58:17.936697 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 6 23:58:17.936718 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:58:17.936743 systemd[1]: Finished systemd-fsck-usr.service.
Jul 6 23:58:17.936766 systemd-journald[178]: Journal started
Jul 6 23:58:17.936807 systemd-journald[178]: Runtime Journal (/run/log/journal/ec2f95676c10d64e6caf5b8578a8befd) is 4.7M, max 38.2M, 33.4M free.
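The \x2d sequences in unit names like dev-disk-by\x2dlabel-ROOT.device are systemd's path escaping: '/' becomes '-', and other unsafe characters (including a literal '-') become \xXX byte escapes. A rough sketch of the rule for plain ASCII paths; this approximates `systemd-escape --path` and is not a full reimplementation:

    def escape_path(path: str) -> str:
        # Approximates systemd's path escaping for simple ASCII inputs.
        out = []
        for ch in path.strip("/"):
            if ch == "/":
                out.append("-")
            elif ch.isalnum() or ch in "_.":
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))
        return "".join(out)

    print(escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
    # -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, as in the log above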
Jul 6 23:58:17.928465 systemd-modules-load[179]: Inserted module 'overlay'
Jul 6 23:58:17.949476 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:58:17.953078 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:58:17.954309 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:58:17.965329 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:58:17.969520 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:58:17.971748 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:58:17.980444 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:58:17.990705 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 6 23:58:17.995145 kernel: Bridge firewalling registered
Jul 6 23:58:17.995137 systemd-modules-load[179]: Inserted module 'br_netfilter'
Jul 6 23:58:17.998272 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:58:18.005222 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 6 23:58:18.011763 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:58:18.013681 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:58:18.014729 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:58:18.026322 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 6 23:58:18.029610 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:58:18.030669 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:58:18.039247 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:58:18.046043 dracut-cmdline[209]: dracut-dracut-053
Jul 6 23:58:18.050779 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:58:18.089787 systemd-resolved[213]: Positive Trust Anchors:
Jul 6 23:58:18.089841 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:58:18.089895 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:58:18.094080 systemd-resolved[213]: Defaulting to hostname 'linux'.
Jul 6 23:58:18.095432 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:58:18.098698 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:58:18.145078 kernel: SCSI subsystem initialized
Jul 6 23:58:18.155064 kernel: Loading iSCSI transport class v2.0-870.
Jul 6 23:58:18.166061 kernel: iscsi: registered transport (tcp)
Jul 6 23:58:18.188085 kernel: iscsi: registered transport (qla4xxx)
Jul 6 23:58:18.188167 kernel: QLogic iSCSI HBA Driver
Jul 6 23:58:18.228291 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:58:18.232265 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 6 23:58:18.267255 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 6 23:58:18.267333 kernel: device-mapper: uevent: version 1.0.3
Jul 6 23:58:18.270063 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 6 23:58:18.311069 kernel: raid6: avx512x4 gen() 18108 MB/s
Jul 6 23:58:18.329064 kernel: raid6: avx512x2 gen() 18109 MB/s
Jul 6 23:58:18.347063 kernel: raid6: avx512x1 gen() 18152 MB/s
Jul 6 23:58:18.365062 kernel: raid6: avx2x4 gen() 17692 MB/s
Jul 6 23:58:18.383066 kernel: raid6: avx2x2 gen() 17903 MB/s
Jul 6 23:58:18.401378 kernel: raid6: avx2x1 gen() 13636 MB/s
Jul 6 23:58:18.401448 kernel: raid6: using algorithm avx512x1 gen() 18152 MB/s
Jul 6 23:58:18.420369 kernel: raid6: .... xor() 21670 MB/s, rmw enabled
Jul 6 23:58:18.420442 kernel: raid6: using avx512x2 recovery algorithm
Jul 6 23:58:18.442069 kernel: xor: automatically using best checksumming function avx
Jul 6 23:58:18.602071 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 6 23:58:18.612796 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:58:18.616288 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:58:18.644764 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Jul 6 23:58:18.650000 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:58:18.661430 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 6 23:58:18.678325 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation
Jul 6 23:58:18.709780 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:58:18.716339 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:58:18.772064 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:58:18.781286 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 6 23:58:18.813146 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:58:18.815105 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:58:18.818111 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:58:18.818695 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:58:18.827344 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 6 23:58:18.855679 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:58:18.892028 kernel: cryptd: max_cpu_qlen set to 1000
Jul 6 23:58:18.909396 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 6 23:58:18.909478 kernel: AES CTR mode by8 optimization enabled
Jul 6 23:58:18.915070 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jul 6 23:58:18.915372 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jul 6 23:58:18.922381 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 6 23:58:18.933846 kernel: nvme nvme0: pci function 0000:00:04.0
Jul 6 23:58:18.934147 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jul 6 23:58:18.934349 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jul 6 23:58:18.934377 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:15:34:33:2b:51
Jul 6 23:58:18.924225 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:58:18.927418 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:58:18.928015 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:58:18.928279 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:58:18.928895 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:58:18.946063 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jul 6 23:58:18.947540 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:58:18.956261 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 6 23:58:18.956322 kernel: GPT:9289727 != 16777215
Jul 6 23:58:18.956341 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 6 23:58:18.957314 kernel: GPT:9289727 != 16777215
Jul 6 23:58:18.957361 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 6 23:58:18.957391 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 6 23:58:18.960216 (udev-worker)[447]: Network interface NamePolicy= disabled on kernel command line.
Jul 6 23:58:18.979720 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:58:18.987309 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:58:19.013017 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:58:19.030496 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (443)
Jul 6 23:58:19.071059 kernel: BTRFS: device fsid 01287863-c21f-4cbb-820d-bbae8208f32f devid 1 transid 34 /dev/nvme0n1p3 scanned by (udev-worker) (460)
Jul 6 23:58:19.106090 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jul 6 23:58:19.116758 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
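The GPT complaints above are expected on a first boot: the primary GPT header records where the backup header should sit, namely the disk's last LBA, but the Flatcar image was built for a smaller disk than the 8 GiB EBS volume it was written to. The two numbers in "GPT:9289727 != 16777215" are exactly those LBAs, and disk-uuid.service relocates the backup header a few lines below. The arithmetic, as a sketch:

    SECTOR = 512  # EBS volumes present 512-byte logical sectors

    image_last_lba = 9289727   # backup-header LBA recorded in the image's GPT
    disk_sectors = 16777216    # sectors on the 8 GiB volume
    kernel_expected = disk_sectors - 1

    print((image_last_lba + 1) * SECTOR / 2**30)  # ~4.43 GiB original image size
    print(kernel_expected)                        # 16777215, as in the log line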
Jul 6 23:58:19.123721 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jul 6 23:58:19.134496 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jul 6 23:58:19.135202 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jul 6 23:58:19.148283 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 6 23:58:19.154865 disk-uuid[628]: Primary Header is updated.
Jul 6 23:58:19.154865 disk-uuid[628]: Secondary Entries is updated.
Jul 6 23:58:19.154865 disk-uuid[628]: Secondary Header is updated.
Jul 6 23:58:19.160132 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 6 23:58:19.167098 kernel: GPT:disk_guids don't match.
Jul 6 23:58:19.167159 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 6 23:58:19.168334 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 6 23:58:19.175150 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 6 23:58:20.177153 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 6 23:58:20.177222 disk-uuid[629]: The operation has completed successfully.
Jul 6 23:58:20.304643 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 6 23:58:20.304770 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 6 23:58:20.334282 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 6 23:58:20.338374 sh[970]: Success
Jul 6 23:58:20.358057 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jul 6 23:58:20.456413 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 6 23:58:20.462155 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 6 23:58:20.466060 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 6 23:58:20.505380 kernel: BTRFS info (device dm-0): first mount of filesystem 01287863-c21f-4cbb-820d-bbae8208f32f
Jul 6 23:58:20.505453 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:58:20.507383 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 6 23:58:20.509156 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 6 23:58:20.511460 kernel: BTRFS info (device dm-0): using free space tree
Jul 6 23:58:20.604073 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jul 6 23:58:20.618354 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 6 23:58:20.620077 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 6 23:58:20.630296 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 6 23:58:20.633259 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 6 23:58:20.680060 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:58:20.680141 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:58:20.685797 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 6 23:58:20.692063 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 6 23:58:20.703210 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 6 23:58:20.707479 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:58:20.714097 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 6 23:58:20.723364 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 6 23:58:20.766847 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:58:20.773256 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:58:20.804739 systemd-networkd[1162]: lo: Link UP
Jul 6 23:58:20.804752 systemd-networkd[1162]: lo: Gained carrier
Jul 6 23:58:20.806455 systemd-networkd[1162]: Enumeration completed
Jul 6 23:58:20.807217 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:58:20.807431 systemd-networkd[1162]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:58:20.807436 systemd-networkd[1162]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:58:20.809393 systemd[1]: Reached target network.target - Network.
Jul 6 23:58:20.811762 systemd-networkd[1162]: eth0: Link UP
Jul 6 23:58:20.811772 systemd-networkd[1162]: eth0: Gained carrier
Jul 6 23:58:20.811788 systemd-networkd[1162]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:58:20.824816 systemd-networkd[1162]: eth0: DHCPv4 address 172.31.21.95/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 6 23:58:21.039480 ignition[1095]: Ignition 2.19.0
Jul 6 23:58:21.039495 ignition[1095]: Stage: fetch-offline
Jul 6 23:58:21.039758 ignition[1095]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:58:21.039771 ignition[1095]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:58:21.040501 ignition[1095]: Ignition finished successfully
Jul 6 23:58:21.042696 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:58:21.047258 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
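The DHCPv4 lease above can be sanity-checked with the standard library: the /20 prefix puts both the leased address and the gateway in the same network.

    import ipaddress

    # Values taken from the systemd-networkd lease line above.
    iface = ipaddress.ip_interface("172.31.21.95/20")
    print(iface.network)                                         # 172.31.16.0/20
    print(ipaddress.ip_address("172.31.16.1") in iface.network)  # True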
Jul 6 23:58:21.062848 ignition[1171]: Ignition 2.19.0
Jul 6 23:58:21.062862 ignition[1171]: Stage: fetch
Jul 6 23:58:21.063430 ignition[1171]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:58:21.063454 ignition[1171]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:58:21.063573 ignition[1171]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:58:21.122061 ignition[1171]: PUT result: OK
Jul 6 23:58:21.136461 ignition[1171]: parsed url from cmdline: ""
Jul 6 23:58:21.136474 ignition[1171]: no config URL provided
Jul 6 23:58:21.136483 ignition[1171]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:58:21.136495 ignition[1171]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:58:21.136515 ignition[1171]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:58:21.139104 ignition[1171]: PUT result: OK
Jul 6 23:58:21.139175 ignition[1171]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jul 6 23:58:21.139827 ignition[1171]: GET result: OK
Jul 6 23:58:21.139913 ignition[1171]: parsing config with SHA512: 75604627dbb61433d7d8a1e30abff1d25a2662f62d133e78e1522f352dea1a9ddcd4b638af1b4a87d5693f343011b76823195c778f009500033f35b344b1d6ea
Jul 6 23:58:21.143939 unknown[1171]: fetched base config from "system"
Jul 6 23:58:21.144663 ignition[1171]: fetch: fetch complete
Jul 6 23:58:21.143949 unknown[1171]: fetched base config from "system"
Jul 6 23:58:21.144669 ignition[1171]: fetch: fetch passed
Jul 6 23:58:21.143955 unknown[1171]: fetched user config from "aws"
Jul 6 23:58:21.144725 ignition[1171]: Ignition finished successfully
Jul 6 23:58:21.148195 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 6 23:58:21.152248 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 6 23:58:21.169131 ignition[1178]: Ignition 2.19.0
Jul 6 23:58:21.169145 ignition[1178]: Stage: kargs
Jul 6 23:58:21.169646 ignition[1178]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:58:21.169660 ignition[1178]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:58:21.169790 ignition[1178]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:58:21.170829 ignition[1178]: PUT result: OK
Jul 6 23:58:21.173742 ignition[1178]: kargs: kargs passed
Jul 6 23:58:21.173821 ignition[1178]: Ignition finished successfully
Jul 6 23:58:21.175413 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 6 23:58:21.181263 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 6 23:58:21.196692 ignition[1184]: Ignition 2.19.0
Jul 6 23:58:21.196705 ignition[1184]: Stage: disks
Jul 6 23:58:21.197209 ignition[1184]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:58:21.197223 ignition[1184]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:58:21.197341 ignition[1184]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:58:21.198230 ignition[1184]: PUT result: OK
Jul 6 23:58:21.201004 ignition[1184]: disks: disks passed
Jul 6 23:58:21.201105 ignition[1184]: Ignition finished successfully
Jul 6 23:58:21.202895 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 6 23:58:21.203677 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 6 23:58:21.204058 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 6 23:58:21.204581 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:58:21.205107 systemd[1]: Reached target sysinit.target - System Initialization.
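Each Ignition stage logs the same metadata exchange: a PUT to mint an IMDSv2 session token, then authenticated GETs against the instance metadata service. A sketch of that exchange (these are the standard EC2 endpoint and headers; it only works from inside an instance):

    import hashlib
    import urllib.request

    # Step 1: PUT for a session token, as in "PUT .../latest/api/token" above.
    req = urllib.request.Request(
        "http://169.254.169.254/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(req, timeout=2).read().decode()

    # Step 2: GET the user data with the token, as in
    # "GET .../2019-10-01/user-data" above.
    req = urllib.request.Request(
        "http://169.254.169.254/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token},
    )
    config = urllib.request.urlopen(req, timeout=2).read()

    # Ignition then logs the config's SHA512, like the digest line above.
    print(hashlib.sha512(config).hexdigest())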
Jul 6 23:58:21.205630 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:58:21.211258 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 6 23:58:21.242438 systemd-fsck[1192]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 6 23:58:21.245240 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 6 23:58:21.248197 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 6 23:58:21.346060 kernel: EXT4-fs (nvme0n1p9): mounted filesystem c3eefe20-4a42-420d-8034-4d5498275b2f r/w with ordered data mode. Quota mode: none.
Jul 6 23:58:21.346708 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 6 23:58:21.348070 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:58:21.354162 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:58:21.358174 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 6 23:58:21.359347 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 6 23:58:21.359421 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 6 23:58:21.359454 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:58:21.373058 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1211)
Jul 6 23:58:21.375726 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 6 23:58:21.379535 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:58:21.379563 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:58:21.379583 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 6 23:58:21.382323 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 6 23:58:21.386249 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 6 23:58:21.387920 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:58:21.708632 initrd-setup-root[1235]: cut: /sysroot/etc/passwd: No such file or directory
Jul 6 23:58:21.758354 initrd-setup-root[1242]: cut: /sysroot/etc/group: No such file or directory
Jul 6 23:58:21.773881 initrd-setup-root[1249]: cut: /sysroot/etc/shadow: No such file or directory
Jul 6 23:58:21.778555 initrd-setup-root[1256]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 6 23:58:21.930183 systemd-networkd[1162]: eth0: Gained IPv6LL
Jul 6 23:58:22.021090 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 6 23:58:22.028149 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 6 23:58:22.033338 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 6 23:58:22.039605 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 6 23:58:22.041677 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:58:22.071016 ignition[1325]: INFO : Ignition 2.19.0 Jul 6 23:58:22.072770 ignition[1325]: INFO : Stage: mount Jul 6 23:58:22.072770 ignition[1325]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:58:22.072770 ignition[1325]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 6 23:58:22.072770 ignition[1325]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 6 23:58:22.075051 ignition[1325]: INFO : PUT result: OK Jul 6 23:58:22.077966 ignition[1325]: INFO : mount: mount passed Jul 6 23:58:22.078676 ignition[1325]: INFO : Ignition finished successfully Jul 6 23:58:22.081189 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 6 23:58:22.087175 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 6 23:58:22.090791 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 6 23:58:22.099263 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:58:22.129057 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1337) Jul 6 23:58:22.129113 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:58:22.132810 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:58:22.132900 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 6 23:58:22.139438 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 6 23:58:22.141780 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 6 23:58:22.167614 ignition[1354]: INFO : Ignition 2.19.0 Jul 6 23:58:22.167614 ignition[1354]: INFO : Stage: files Jul 6 23:58:22.168932 ignition[1354]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:58:22.168932 ignition[1354]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 6 23:58:22.168932 ignition[1354]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 6 23:58:22.170217 ignition[1354]: INFO : PUT result: OK Jul 6 23:58:22.172541 ignition[1354]: DEBUG : files: compiled without relabeling support, skipping Jul 6 23:58:22.173455 ignition[1354]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 6 23:58:22.173455 ignition[1354]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 6 23:58:22.189338 ignition[1354]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 6 23:58:22.190136 ignition[1354]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 6 23:58:22.190136 ignition[1354]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 6 23:58:22.189897 unknown[1354]: wrote ssh authorized keys file for user: core Jul 6 23:58:22.201757 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 6 23:58:22.202604 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jul 6 23:58:22.296595 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 6 23:58:22.490686 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 6 23:58:22.490686 ignition[1354]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 6 23:58:22.492640 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 6 23:58:22.492640 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:58:22.492640 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:58:22.492640 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:58:22.492640 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:58:22.492640 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:58:22.492640 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:58:22.492640 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:58:22.492640 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:58:22.492640 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 6 23:58:22.492640 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 6 23:58:22.492640 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 6 23:58:22.492640 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jul 6 23:58:23.093719 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 6 23:58:23.561236 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 6 23:58:23.561236 ignition[1354]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 6 23:58:23.564726 ignition[1354]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:58:23.566060 ignition[1354]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:58:23.566060 ignition[1354]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 6 23:58:23.566060 ignition[1354]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 6 23:58:23.566060 ignition[1354]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 6 23:58:23.566060 ignition[1354]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 6 
23:58:23.566060 ignition[1354]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:58:23.566060 ignition[1354]: INFO : files: files passed Jul 6 23:58:23.566060 ignition[1354]: INFO : Ignition finished successfully Jul 6 23:58:23.567794 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 6 23:58:23.575350 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 6 23:58:23.577218 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 6 23:58:23.583889 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 6 23:58:23.584016 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 6 23:58:23.604357 initrd-setup-root-after-ignition[1387]: grep: Jul 6 23:58:23.605431 initrd-setup-root-after-ignition[1383]: grep: Jul 6 23:58:23.606222 initrd-setup-root-after-ignition[1387]: /sysroot/etc/flatcar/enabled-sysext.conf Jul 6 23:58:23.606818 initrd-setup-root-after-ignition[1383]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:58:23.606529 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:58:23.609587 initrd-setup-root-after-ignition[1387]: : No such file or directory Jul 6 23:58:23.607611 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 6 23:58:23.610828 initrd-setup-root-after-ignition[1383]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:58:23.618616 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 6 23:58:23.652116 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 6 23:58:23.652278 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 6 23:58:23.653419 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 6 23:58:23.654487 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 6 23:58:23.655406 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 6 23:58:23.657022 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 6 23:58:23.677024 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:58:23.683250 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 6 23:58:23.694749 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:58:23.695689 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:58:23.696652 systemd[1]: Stopped target timers.target - Timer Units. Jul 6 23:58:23.697473 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 6 23:58:23.697695 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:58:23.698792 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 6 23:58:23.699775 systemd[1]: Stopped target basic.target - Basic System. Jul 6 23:58:23.700542 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 6 23:58:23.701305 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:58:23.702021 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 6 23:58:23.702751 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
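Note the layout written by ops (9) and (a) in the files stage above: the sysext payload lands under /opt/extensions, while /etc/extensions/kubernetes.raw is only a symlink to it, and /etc/extensions is where systemd-sysext looks for images to merge. A sketch of the same link layout against a scratch directory (the /tmp staging path is hypothetical):

    import os

    # Mirrors ops (9) and (a) above; run against a scratch root, never "/".
    root = "/tmp/sysroot-demo"  # hypothetical staging directory
    target = "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
    link = os.path.join(root, "etc/extensions/kubernetes.raw")

    os.makedirs(os.path.dirname(link), exist_ok=True)
    if not os.path.lexists(link):
        # The link may dangle inside the staging root; systemd-sysext only
        # resolves it at merge time on the running system.
        os.symlink(target, link)
    print(link, "->", os.readlink(link))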
Jul 6 23:58:23.703626 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:58:23.704397 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 6 23:58:23.705520 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 6 23:58:23.706263 systemd[1]: Stopped target swap.target - Swaps. Jul 6 23:58:23.707115 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 6 23:58:23.707298 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:58:23.708329 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:58:23.709111 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:58:23.709764 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 6 23:58:23.709906 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:58:23.710599 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 6 23:58:23.710816 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 6 23:58:23.711908 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 6 23:58:23.712113 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:58:23.712733 systemd[1]: ignition-files.service: Deactivated successfully. Jul 6 23:58:23.712885 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 6 23:58:23.724740 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 6 23:58:23.725440 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 6 23:58:23.725660 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:58:23.729826 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 6 23:58:23.731190 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 6 23:58:23.732225 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:58:23.733990 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 6 23:58:23.735112 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:58:23.740643 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 6 23:58:23.741587 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 6 23:58:23.751151 ignition[1407]: INFO : Ignition 2.19.0 Jul 6 23:58:23.751151 ignition[1407]: INFO : Stage: umount Jul 6 23:58:23.752755 ignition[1407]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:58:23.752755 ignition[1407]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 6 23:58:23.752755 ignition[1407]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 6 23:58:23.755133 ignition[1407]: INFO : PUT result: OK Jul 6 23:58:23.757210 ignition[1407]: INFO : umount: umount passed Jul 6 23:58:23.757785 ignition[1407]: INFO : Ignition finished successfully Jul 6 23:58:23.759667 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 6 23:58:23.759796 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 6 23:58:23.760583 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 6 23:58:23.760641 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 6 23:58:23.761201 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 6 23:58:23.761261 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
Jul 6 23:58:23.762242 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 6 23:58:23.762319 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 6 23:58:23.762804 systemd[1]: Stopped target network.target - Network. Jul 6 23:58:23.764101 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 6 23:58:23.764168 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:58:23.765504 systemd[1]: Stopped target paths.target - Path Units. Jul 6 23:58:23.765953 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 6 23:58:23.767019 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:58:23.767988 systemd[1]: Stopped target slices.target - Slice Units. Jul 6 23:58:23.768919 systemd[1]: Stopped target sockets.target - Socket Units. Jul 6 23:58:23.769915 systemd[1]: iscsid.socket: Deactivated successfully. Jul 6 23:58:23.769974 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:58:23.770995 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 6 23:58:23.771073 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:58:23.771662 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 6 23:58:23.771733 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 6 23:58:23.772262 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 6 23:58:23.772337 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 6 23:58:23.773060 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 6 23:58:23.774252 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 6 23:58:23.776804 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 6 23:58:23.778347 systemd-networkd[1162]: eth0: DHCPv6 lease lost Jul 6 23:58:23.781003 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 6 23:58:23.782473 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 6 23:58:23.783528 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 6 23:58:23.783575 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:58:23.786330 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 6 23:58:23.786842 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 6 23:58:23.786932 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:58:23.787654 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:58:23.789650 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 6 23:58:23.790802 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 6 23:58:23.796538 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:58:23.796659 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:58:23.799020 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 6 23:58:23.799132 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 6 23:58:23.799659 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 6 23:58:23.799719 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:58:23.801764 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jul 6 23:58:23.801965 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:58:23.807151 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 6 23:58:23.807239 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 6 23:58:23.809160 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 6 23:58:23.809754 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:58:23.811099 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 6 23:58:23.811167 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:58:23.812352 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 6 23:58:23.812419 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 6 23:58:23.813993 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:58:23.814123 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:58:23.818012 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 6 23:58:23.819365 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 6 23:58:23.819477 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:58:23.821942 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:58:23.822651 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:58:23.824365 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 6 23:58:23.825249 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 6 23:58:23.833221 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 6 23:58:23.833349 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 6 23:58:23.916951 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 6 23:58:23.917100 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 6 23:58:23.918386 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 6 23:58:23.918834 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 6 23:58:23.919073 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 6 23:58:23.923265 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 6 23:58:23.932833 systemd[1]: Switching root. Jul 6 23:58:23.964100 systemd-journald[178]: Journal stopped Jul 6 23:58:25.496956 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). Jul 6 23:58:25.506112 kernel: SELinux: policy capability network_peer_controls=1 Jul 6 23:58:25.506152 kernel: SELinux: policy capability open_perms=1 Jul 6 23:58:25.506175 kernel: SELinux: policy capability extended_socket_class=1 Jul 6 23:58:25.506201 kernel: SELinux: policy capability always_check_network=0 Jul 6 23:58:25.506226 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 6 23:58:25.506246 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 6 23:58:25.506267 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 6 23:58:25.506287 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 6 23:58:25.506309 kernel: audit: type=1403 audit(1751846304.329:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 6 23:58:25.506343 systemd[1]: Successfully loaded SELinux policy in 65.739ms. Jul 6 23:58:25.506378 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.722ms. 
Jul 6 23:58:25.506404 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 6 23:58:25.506429 systemd[1]: Detected virtualization amazon. Jul 6 23:58:25.506451 systemd[1]: Detected architecture x86-64. Jul 6 23:58:25.506472 systemd[1]: Detected first boot. Jul 6 23:58:25.506494 systemd[1]: Initializing machine ID from VM UUID. Jul 6 23:58:25.506517 zram_generator::config[1450]: No configuration found. Jul 6 23:58:25.506539 systemd[1]: Populated /etc with preset unit settings. Jul 6 23:58:25.506561 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 6 23:58:25.506583 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 6 23:58:25.506607 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 6 23:58:25.506632 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 6 23:58:25.506654 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 6 23:58:25.506676 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 6 23:58:25.506698 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 6 23:58:25.506720 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 6 23:58:25.506743 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 6 23:58:25.506765 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 6 23:58:25.506787 systemd[1]: Created slice user.slice - User and Session Slice. Jul 6 23:58:25.506811 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:58:25.506833 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:58:25.506855 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 6 23:58:25.506877 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 6 23:58:25.506907 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 6 23:58:25.506929 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:58:25.506951 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 6 23:58:25.506974 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:58:25.506996 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 6 23:58:25.507020 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 6 23:58:25.507056 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 6 23:58:25.507078 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 6 23:58:25.507100 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:58:25.507128 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:58:25.507149 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:58:25.507171 systemd[1]: Reached target swap.target - Swaps. 
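"Initializing machine ID from VM UUID" above means systemd seeded /etc/machine-id from the hypervisor-provided DMI product UUID rather than generating a random one. A rough equivalent, assuming a Linux guest that exposes /sys/class/dmi/id/product_uuid (systemd applies validation beyond what is shown here):

    # Rough equivalent of seeding the machine ID from the VM UUID.
    with open("/sys/class/dmi/id/product_uuid") as f:
        product_uuid = f.read().strip()

    machine_id = product_uuid.replace("-", "").lower()  # 32 hex chars, no dashes
    print(machine_id)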
Jul 6 23:58:25.507193 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 6 23:58:25.507219 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 6 23:58:25.507241 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:58:25.507264 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:58:25.507286 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:58:25.507308 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 6 23:58:25.507336 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 6 23:58:25.507358 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 6 23:58:25.507380 systemd[1]: Mounting media.mount - External Media Directory... Jul 6 23:58:25.507402 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:58:25.507428 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 6 23:58:25.507450 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 6 23:58:25.507471 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 6 23:58:25.507494 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 6 23:58:25.507516 systemd[1]: Reached target machines.target - Containers. Jul 6 23:58:25.507538 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 6 23:58:25.507560 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:58:25.507582 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:58:25.507608 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 6 23:58:25.507637 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:58:25.507659 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:58:25.507681 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:58:25.507705 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 6 23:58:25.507727 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:58:25.507750 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 6 23:58:25.507772 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 6 23:58:25.507797 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 6 23:58:25.507819 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 6 23:58:25.507841 systemd[1]: Stopped systemd-fsck-usr.service. Jul 6 23:58:25.507864 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:58:25.507885 kernel: loop: module loaded Jul 6 23:58:25.507906 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:58:25.507928 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 6 23:58:25.507950 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Jul 6 23:58:25.507972 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:58:25.507998 systemd[1]: verity-setup.service: Deactivated successfully. Jul 6 23:58:25.508020 systemd[1]: Stopped verity-setup.service. Jul 6 23:58:25.508824 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:58:25.508856 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 6 23:58:25.508879 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 6 23:58:25.508901 systemd[1]: Mounted media.mount - External Media Directory. Jul 6 23:58:25.508924 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 6 23:58:25.508947 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 6 23:58:25.508975 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 6 23:58:25.508997 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:58:25.509020 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 6 23:58:25.509054 kernel: fuse: init (API version 7.39) Jul 6 23:58:25.509076 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 6 23:58:25.509098 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:58:25.509124 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:58:25.509147 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:58:25.509169 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:58:25.509191 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:58:25.509213 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:58:25.509270 systemd-journald[1528]: Collecting audit messages is disabled. Jul 6 23:58:25.509311 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:58:25.509341 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 6 23:58:25.509364 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 6 23:58:25.509388 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 6 23:58:25.509410 systemd-journald[1528]: Journal started Jul 6 23:58:25.509459 systemd-journald[1528]: Runtime Journal (/run/log/journal/ec2f95676c10d64e6caf5b8578a8befd) is 4.7M, max 38.2M, 33.4M free. Jul 6 23:58:25.121612 systemd[1]: Queued start job for default target multi-user.target. Jul 6 23:58:25.156360 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jul 6 23:58:25.156782 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 6 23:58:25.518926 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 6 23:58:25.524601 kernel: ACPI: bus type drm_connector registered Jul 6 23:58:25.524684 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 6 23:58:25.530791 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:58:25.537378 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 6 23:58:25.547298 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Jul 6 23:58:25.547380 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 6 23:58:25.551181 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:58:25.560060 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 6 23:58:25.569058 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:58:25.575058 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 6 23:58:25.580066 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:58:25.599710 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:58:25.599819 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 6 23:58:25.605062 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:58:25.609694 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 6 23:58:25.611530 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:58:25.611757 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:58:25.613641 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 6 23:58:25.614408 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 6 23:58:25.616480 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 6 23:58:25.618672 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 6 23:58:25.634821 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 6 23:58:25.667306 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 6 23:58:25.679148 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 6 23:58:25.687933 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 6 23:58:25.700287 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 6 23:58:25.710083 kernel: loop0: detected capacity change from 0 to 61336 Jul 6 23:58:25.707471 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 6 23:58:25.710072 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:58:25.712451 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 6 23:58:25.727460 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 6 23:58:25.760742 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 6 23:58:25.765768 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 6 23:58:25.769056 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 6 23:58:25.777441 systemd-journald[1528]: Time spent on flushing to /var/log/journal/ec2f95676c10d64e6caf5b8578a8befd is 59.947ms for 994 entries. Jul 6 23:58:25.777441 systemd-journald[1528]: System Journal (/var/log/journal/ec2f95676c10d64e6caf5b8578a8befd) is 8.0M, max 195.6M, 187.6M free. Jul 6 23:58:25.850169 systemd-journald[1528]: Received client request to flush runtime journal. 
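Unit names such as etc-machine\x2did.mount above encode filesystem paths: path separators become "-", while "-" itself and other bytes outside [a-zA-Z0-9:_.] are escaped as \xXX. A simplified take on `systemd-escape --path`, covering the common cases only:

    # Simplified systemd path escaping; skips special handling of "/" itself
    # and of a leading ".".
    def systemd_escape_path(path: str) -> str:
        parts = path.strip("/").split("/")
        return "-".join(
            "".join(c if c.isalnum() or c in ":_." else f"\\x{ord(c):02x}"
                    for c in part)
            for part in parts
        )

    print(systemd_escape_path("/etc/machine-id"))         # etc-machine\x2did
    print(systemd_escape_path("/dev/disk/by-label/OEM"))  # dev-disk-by\x2dlabel-OEM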
Jul 6 23:58:25.850234 kernel: loop1: detected capacity change from 0 to 229808 Jul 6 23:58:25.810277 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:58:25.822678 udevadm[1588]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 6 23:58:25.853351 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 6 23:58:25.875083 kernel: loop2: detected capacity change from 0 to 142488 Jul 6 23:58:25.875249 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 6 23:58:25.885288 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:58:25.915458 systemd-tmpfiles[1599]: ACLs are not supported, ignoring. Jul 6 23:58:25.915801 systemd-tmpfiles[1599]: ACLs are not supported, ignoring. Jul 6 23:58:25.921425 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:58:25.977283 kernel: loop3: detected capacity change from 0 to 140768 Jul 6 23:58:26.073075 kernel: loop4: detected capacity change from 0 to 61336 Jul 6 23:58:26.106062 kernel: loop5: detected capacity change from 0 to 229808 Jul 6 23:58:26.144058 kernel: loop6: detected capacity change from 0 to 142488 Jul 6 23:58:26.184073 kernel: loop7: detected capacity change from 0 to 140768 Jul 6 23:58:26.220775 (sd-merge)[1604]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jul 6 23:58:26.221528 (sd-merge)[1604]: Merged extensions into '/usr'. Jul 6 23:58:26.228534 systemd[1]: Reloading requested from client PID 1560 ('systemd-sysext') (unit systemd-sysext.service)... Jul 6 23:58:26.228703 systemd[1]: Reloading... Jul 6 23:58:26.379109 zram_generator::config[1633]: No configuration found. Jul 6 23:58:26.590656 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:58:26.694212 systemd[1]: Reloading finished in 464 ms. Jul 6 23:58:26.727806 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 6 23:58:26.737314 systemd[1]: Starting ensure-sysext.service... Jul 6 23:58:26.740867 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:58:26.760123 systemd[1]: Reloading requested from client PID 1681 ('systemctl') (unit ensure-sysext.service)... Jul 6 23:58:26.762167 systemd[1]: Reloading... Jul 6 23:58:26.807661 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 6 23:58:26.808694 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 6 23:58:26.810306 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 6 23:58:26.810867 systemd-tmpfiles[1682]: ACLs are not supported, ignoring. Jul 6 23:58:26.811081 systemd-tmpfiles[1682]: ACLs are not supported, ignoring. Jul 6 23:58:26.819758 systemd-tmpfiles[1682]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:58:26.819935 systemd-tmpfiles[1682]: Skipping /boot Jul 6 23:58:26.853422 systemd-tmpfiles[1682]: Detected autofs mount point /boot during canonicalization of boot. 
Jul 6 23:58:26.853606 systemd-tmpfiles[1682]: Skipping /boot Jul 6 23:58:26.895065 zram_generator::config[1713]: No configuration found. Jul 6 23:58:27.038742 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:58:27.101778 ldconfig[1556]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 6 23:58:27.101461 systemd[1]: Reloading finished in 338 ms. Jul 6 23:58:27.116305 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 6 23:58:27.117107 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 6 23:58:27.123619 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:58:27.142503 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 6 23:58:27.147433 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 6 23:58:27.150616 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 6 23:58:27.161926 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:58:27.169666 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:58:27.172306 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 6 23:58:27.178765 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:58:27.179884 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:58:27.193434 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:58:27.202719 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:58:27.207495 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:58:27.208269 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:58:27.212371 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 6 23:58:27.212993 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:58:27.216952 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:58:27.217262 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:58:27.217506 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:58:27.217658 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:58:27.230431 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:58:27.230934 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:58:27.240467 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
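The (sd-merge) lines above show systemd-sysext discovering the extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'), attaching them (hence the loop0..loop7 capacity changes earlier), and overlaying their /usr trees onto the host before systemd reloads. A sketch that merely enumerates candidate images from the usual search directories (a subset of the full sysext search path):

    import os

    # systemd-sysext image locations (a subset of its full search path);
    # /etc/extensions/kubernetes.raw is the symlink Ignition created earlier.
    SEARCH = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

    for directory in SEARCH:
        if os.path.isdir(directory):
            for name in sorted(os.listdir(directory)):
                print(os.path.join(directory, name))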
Jul 6 23:58:27.241239 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:58:27.241519 systemd[1]: Reached target time-set.target - System Time Set. Jul 6 23:58:27.243296 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:58:27.244631 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 6 23:58:27.246979 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:58:27.247183 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:58:27.255484 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:58:27.255695 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:58:27.256869 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 6 23:58:27.259192 systemd[1]: Finished ensure-sysext.service. Jul 6 23:58:27.260713 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:58:27.260905 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:58:27.273320 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:58:27.273456 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:58:27.285328 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 6 23:58:27.296473 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 6 23:58:27.297337 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 6 23:58:27.304491 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:58:27.306177 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:58:27.309450 systemd-udevd[1776]: Using default interface naming scheme 'v255'. Jul 6 23:58:27.319897 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 6 23:58:27.342367 augenrules[1801]: No rules Jul 6 23:58:27.343688 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 6 23:58:27.351620 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 6 23:58:27.375540 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:58:27.386323 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:58:27.432557 systemd-resolved[1770]: Positive Trust Anchors: Jul 6 23:58:27.433106 systemd-resolved[1770]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:58:27.433208 systemd-resolved[1770]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:58:27.453562 systemd-resolved[1770]: Defaulting to hostname 'linux'. Jul 6 23:58:27.455878 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:58:27.456590 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:58:27.461658 systemd-networkd[1816]: lo: Link UP Jul 6 23:58:27.461670 systemd-networkd[1816]: lo: Gained carrier Jul 6 23:58:27.467916 systemd-networkd[1816]: Enumeration completed Jul 6 23:58:27.468147 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:58:27.468611 systemd[1]: Reached target network.target - Network. Jul 6 23:58:27.478196 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 6 23:58:27.502282 (udev-worker)[1821]: Network interface NamePolicy= disabled on kernel command line. Jul 6 23:58:27.517010 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 6 23:58:27.547056 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jul 6 23:58:27.550060 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 6 23:58:27.555886 systemd-networkd[1816]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:58:27.556173 systemd-networkd[1816]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:58:27.559670 systemd-networkd[1816]: eth0: Link UP Jul 6 23:58:27.559959 systemd-networkd[1816]: eth0: Gained carrier Jul 6 23:58:27.560592 systemd-networkd[1816]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:58:27.571239 systemd-networkd[1816]: eth0: DHCPv4 address 172.31.21.95/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 6 23:58:27.574950 kernel: ACPI: button: Power Button [PWRF] Jul 6 23:58:27.575015 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Jul 6 23:58:27.577071 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jul 6 23:58:27.595055 kernel: ACPI: button: Sleep Button [SLPF] Jul 6 23:58:27.598057 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1826) Jul 6 23:58:27.601072 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:58:27.667520 kernel: mousedev: PS/2 mouse device common for all mice Jul 6 23:58:27.738622 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 6 23:58:27.744318 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
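The DHCPv4 lease above (address 172.31.21.95/20, gateway 172.31.16.1 from 172.31.16.1) pins down the subnet arithmetic; a quick check with Python's ipaddress module:

    import ipaddress

    # Lease from the log: address 172.31.21.95/20, gateway 172.31.16.1.
    iface = ipaddress.ip_interface("172.31.21.95/20")
    gateway = ipaddress.ip_address("172.31.16.1")

    print(iface.network)                # 172.31.16.0/20
    print(iface.network.num_addresses)  # 4096 addresses in the /20
    print(gateway in iface.network)     # True: the gateway is on-link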
Jul 6 23:58:27.745605 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 6 23:58:27.746770 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:58:27.755356 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 6 23:58:27.762799 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 6 23:58:27.783476 lvm[1931]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:58:27.813967 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 6 23:58:27.814670 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:58:27.815249 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:58:27.815725 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 6 23:58:27.816150 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 6 23:58:27.816647 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 6 23:58:27.817215 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 6 23:58:27.817557 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 6 23:58:27.817877 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 6 23:58:27.817907 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:58:27.818238 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:58:27.820093 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 6 23:58:27.821841 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 6 23:58:27.828161 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 6 23:58:27.830071 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 6 23:58:27.831601 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 6 23:58:27.832383 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:58:27.833005 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:58:27.833666 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:58:27.833709 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:58:27.840179 systemd[1]: Starting containerd.service - containerd container runtime... Jul 6 23:58:27.847712 lvm[1937]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:58:27.853684 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 6 23:58:27.860290 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 6 23:58:27.863310 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 6 23:58:27.867285 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 6 23:58:27.870142 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 6 23:58:27.876323 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Jul 6 23:58:27.880323 systemd[1]: Started ntpd.service - Network Time Service. Jul 6 23:58:27.886191 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 6 23:58:27.890684 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 6 23:58:27.900436 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 6 23:58:27.929308 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 6 23:58:27.947541 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 6 23:58:27.949254 jq[1941]: false Jul 6 23:58:27.949729 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 6 23:58:27.951305 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 6 23:58:27.953290 systemd[1]: Starting update-engine.service - Update Engine... Jul 6 23:58:27.958241 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 6 23:58:27.962676 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 6 23:58:27.973757 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 6 23:58:27.974619 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 6 23:58:27.982409 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 6 23:58:27.982774 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 6 23:58:27.989568 jq[1955]: true Jul 6 23:58:28.058481 systemd[1]: motdgen.service: Deactivated successfully. Jul 6 23:58:28.058731 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 6 23:58:28.062737 extend-filesystems[1942]: Found loop4 Jul 6 23:58:28.068684 ntpd[1944]: ntpd 4.2.8p17@1.4004-o Sun Jul 6 21:48:38 UTC 2025 (1): Starting Jul 6 23:58:28.075170 extend-filesystems[1942]: Found loop5 Jul 6 23:58:28.075170 extend-filesystems[1942]: Found loop6 Jul 6 23:58:28.075170 extend-filesystems[1942]: Found loop7 Jul 6 23:58:28.075170 extend-filesystems[1942]: Found nvme0n1 Jul 6 23:58:28.075170 extend-filesystems[1942]: Found nvme0n1p1 Jul 6 23:58:28.075170 extend-filesystems[1942]: Found nvme0n1p2 Jul 6 23:58:28.075170 extend-filesystems[1942]: Found nvme0n1p3 Jul 6 23:58:28.075170 extend-filesystems[1942]: Found usr Jul 6 23:58:28.075170 extend-filesystems[1942]: Found nvme0n1p4 Jul 6 23:58:28.075170 extend-filesystems[1942]: Found nvme0n1p6 Jul 6 23:58:28.075170 extend-filesystems[1942]: Found nvme0n1p7 Jul 6 23:58:28.075170 extend-filesystems[1942]: Found nvme0n1p9 Jul 6 23:58:28.075170 extend-filesystems[1942]: Checking size of /dev/nvme0n1p9 Jul 6 23:58:28.136369 ntpd[1944]: 6 Jul 23:58:28 ntpd[1944]: ntpd 4.2.8p17@1.4004-o Sun Jul 6 21:48:38 UTC 2025 (1): Starting Jul 6 23:58:28.136369 ntpd[1944]: 6 Jul 23:58:28 ntpd[1944]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 6 23:58:28.136369 ntpd[1944]: 6 Jul 23:58:28 ntpd[1944]: ---------------------------------------------------- Jul 6 23:58:28.136369 ntpd[1944]: 6 Jul 23:58:28 ntpd[1944]: ntp-4 is maintained by Network Time Foundation, Jul 6 23:58:28.136369 ntpd[1944]: 6 Jul 23:58:28 ntpd[1944]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 6 23:58:28.136369 ntpd[1944]: 6 Jul 23:58:28 ntpd[1944]: corporation. 
Support and training for ntp-4 are Jul 6 23:58:28.136369 ntpd[1944]: 6 Jul 23:58:28 ntpd[1944]: available at https://www.nwtime.org/support Jul 6 23:58:28.136369 ntpd[1944]: 6 Jul 23:58:28 ntpd[1944]: ---------------------------------------------------- Jul 6 23:58:28.136369 ntpd[1944]: 6 Jul 23:58:28 ntpd[1944]: proto: precision = 0.063 usec (-24) Jul 6 23:58:28.136369 ntpd[1944]: 6 Jul 23:58:28 ntpd[1944]: basedate set to 2025-06-24 Jul 6 23:58:28.136369 ntpd[1944]: 6 Jul 23:58:28 ntpd[1944]: gps base set to 2025-06-29 (week 2373) Jul 6 23:58:28.136369 ntpd[1944]: 6 Jul 23:58:28 ntpd[1944]: Listen and drop on 0 v6wildcard [::]:123 Jul 6 23:58:28.136369 ntpd[1944]: 6 Jul 23:58:28 ntpd[1944]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 6 23:58:28.136369 ntpd[1944]: 6 Jul 23:58:28 ntpd[1944]: Listen normally on 2 lo 127.0.0.1:123 Jul 6 23:58:28.136369 ntpd[1944]: 6 Jul 23:58:28 ntpd[1944]: Listen normally on 3 eth0 172.31.21.95:123 Jul 6 23:58:28.136369 ntpd[1944]: 6 Jul 23:58:28 ntpd[1944]: Listen normally on 4 lo [::1]:123 Jul 6 23:58:28.136369 ntpd[1944]: 6 Jul 23:58:28 ntpd[1944]: bind(21) AF_INET6 fe80::415:34ff:fe33:2b51%2#123 flags 0x11 failed: Cannot assign requested address Jul 6 23:58:28.136369 ntpd[1944]: 6 Jul 23:58:28 ntpd[1944]: unable to create socket on eth0 (5) for fe80::415:34ff:fe33:2b51%2#123 Jul 6 23:58:28.136369 ntpd[1944]: 6 Jul 23:58:28 ntpd[1944]: failed to init interface for address fe80::415:34ff:fe33:2b51%2 Jul 6 23:58:28.136369 ntpd[1944]: 6 Jul 23:58:28 ntpd[1944]: Listening on routing socket on fd #21 for interface updates Jul 6 23:58:28.074518 (ntainerd)[1975]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 6 23:58:28.068721 ntpd[1944]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 6 23:58:28.148513 jq[1965]: true Jul 6 23:58:28.148624 tar[1959]: linux-amd64/LICENSE Jul 6 23:58:28.148624 tar[1959]: linux-amd64/helm Jul 6 23:58:28.122150 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 6 23:58:28.068731 ntpd[1944]: ---------------------------------------------------- Jul 6 23:58:28.068741 ntpd[1944]: ntp-4 is maintained by Network Time Foundation, Jul 6 23:58:28.068751 ntpd[1944]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 6 23:58:28.068760 ntpd[1944]: corporation. 
Support and training for ntp-4 are Jul 6 23:58:28.068770 ntpd[1944]: available at https://www.nwtime.org/support Jul 6 23:58:28.068780 ntpd[1944]: ---------------------------------------------------- Jul 6 23:58:28.079143 ntpd[1944]: proto: precision = 0.063 usec (-24) Jul 6 23:58:28.086294 ntpd[1944]: basedate set to 2025-06-24 Jul 6 23:58:28.086318 ntpd[1944]: gps base set to 2025-06-29 (week 2373) Jul 6 23:58:28.114138 ntpd[1944]: Listen and drop on 0 v6wildcard [::]:123 Jul 6 23:58:28.114205 ntpd[1944]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 6 23:58:28.114434 ntpd[1944]: Listen normally on 2 lo 127.0.0.1:123 Jul 6 23:58:28.114487 ntpd[1944]: Listen normally on 3 eth0 172.31.21.95:123 Jul 6 23:58:28.114532 ntpd[1944]: Listen normally on 4 lo [::1]:123 Jul 6 23:58:28.114585 ntpd[1944]: bind(21) AF_INET6 fe80::415:34ff:fe33:2b51%2#123 flags 0x11 failed: Cannot assign requested address Jul 6 23:58:28.114611 ntpd[1944]: unable to create socket on eth0 (5) for fe80::415:34ff:fe33:2b51%2#123 Jul 6 23:58:28.114628 ntpd[1944]: failed to init interface for address fe80::415:34ff:fe33:2b51%2 Jul 6 23:58:28.114664 ntpd[1944]: Listening on routing socket on fd #21 for interface updates Jul 6 23:58:28.121859 dbus-daemon[1940]: [system] SELinux support is enabled Jul 6 23:58:28.152061 extend-filesystems[1942]: Resized partition /dev/nvme0n1p9 Jul 6 23:58:28.151896 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 6 23:58:28.155233 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 6 23:58:28.155281 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 6 23:58:28.158166 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 6 23:58:28.158202 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 6 23:58:28.160797 ntpd[1944]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 6 23:58:28.160844 ntpd[1944]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 6 23:58:28.162450 update_engine[1954]: I20250706 23:58:28.162336 1954 main.cc:92] Flatcar Update Engine starting Jul 6 23:58:28.163288 dbus-daemon[1940]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1816 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 6 23:58:28.168084 extend-filesystems[1992]: resize2fs 1.47.1 (20-May-2024) Jul 6 23:58:28.175174 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 6 23:58:28.173600 dbus-daemon[1940]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 6 23:58:28.202146 update_engine[1954]: I20250706 23:58:28.190279 1954 update_check_scheduler.cc:74] Next update check in 3m2s Jul 6 23:58:28.189265 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jul 6 23:58:28.195085 systemd[1]: Started update-engine.service - Update Engine. 
Jul 6 23:58:28.202739 coreos-metadata[1939]: Jul 06 23:58:28.202 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 6 23:58:28.204258 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 6 23:58:28.213068 coreos-metadata[1939]: Jul 06 23:58:28.210 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jul 6 23:58:28.229565 coreos-metadata[1939]: Jul 06 23:58:28.228 INFO Fetch successful Jul 6 23:58:28.229565 coreos-metadata[1939]: Jul 06 23:58:28.228 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jul 6 23:58:28.232091 coreos-metadata[1939]: Jul 06 23:58:28.231 INFO Fetch successful Jul 6 23:58:28.232091 coreos-metadata[1939]: Jul 06 23:58:28.231 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jul 6 23:58:28.233063 coreos-metadata[1939]: Jul 06 23:58:28.232 INFO Fetch successful Jul 6 23:58:28.233063 coreos-metadata[1939]: Jul 06 23:58:28.232 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jul 6 23:58:28.240504 coreos-metadata[1939]: Jul 06 23:58:28.240 INFO Fetch successful Jul 6 23:58:28.240504 coreos-metadata[1939]: Jul 06 23:58:28.240 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jul 6 23:58:28.244987 coreos-metadata[1939]: Jul 06 23:58:28.244 INFO Fetch failed with 404: resource not found Jul 6 23:58:28.244987 coreos-metadata[1939]: Jul 06 23:58:28.244 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jul 6 23:58:28.248254 coreos-metadata[1939]: Jul 06 23:58:28.248 INFO Fetch successful Jul 6 23:58:28.248254 coreos-metadata[1939]: Jul 06 23:58:28.248 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jul 6 23:58:28.252120 coreos-metadata[1939]: Jul 06 23:58:28.252 INFO Fetch successful Jul 6 23:58:28.252224 coreos-metadata[1939]: Jul 06 23:58:28.252 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jul 6 23:58:28.252534 systemd[1]: Finished setup-oem.service - Setup OEM. Jul 6 23:58:28.256439 coreos-metadata[1939]: Jul 06 23:58:28.256 INFO Fetch successful Jul 6 23:58:28.256439 coreos-metadata[1939]: Jul 06 23:58:28.256 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jul 6 23:58:28.259921 coreos-metadata[1939]: Jul 06 23:58:28.259 INFO Fetch successful Jul 6 23:58:28.259921 coreos-metadata[1939]: Jul 06 23:58:28.259 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jul 6 23:58:28.260444 coreos-metadata[1939]: Jul 06 23:58:28.260 INFO Fetch successful Jul 6 23:58:28.262955 systemd-logind[1952]: Watching system buttons on /dev/input/event1 (Power Button) Jul 6 23:58:28.263253 systemd-logind[1952]: Watching system buttons on /dev/input/event3 (Sleep Button) Jul 6 23:58:28.263279 systemd-logind[1952]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 6 23:58:28.264155 systemd-logind[1952]: New seat seat0. Jul 6 23:58:28.265785 systemd[1]: Started systemd-logind.service - User Login Management. 
Jul 6 23:58:28.289071 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 6 23:58:28.308601 extend-filesystems[1992]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 6 23:58:28.308601 extend-filesystems[1992]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 6 23:58:28.308601 extend-filesystems[1992]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jul 6 23:58:28.312673 extend-filesystems[1942]: Resized filesystem in /dev/nvme0n1p9 Jul 6 23:58:28.312528 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 6 23:58:28.318704 bash[2018]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:58:28.312784 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 6 23:58:28.321214 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 6 23:58:28.340233 systemd[1]: Starting sshkeys.service... Jul 6 23:58:28.369087 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 6 23:58:28.381522 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 6 23:58:28.389388 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1823) Jul 6 23:58:28.400566 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 6 23:58:28.401946 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 6 23:58:28.542906 dbus-daemon[1940]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 6 23:58:28.543310 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 6 23:58:28.549326 dbus-daemon[1940]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2000 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 6 23:58:28.562012 systemd[1]: Starting polkit.service - Authorization Manager... Jul 6 23:58:28.666190 polkitd[2055]: Started polkitd version 121 Jul 6 23:58:28.696015 polkitd[2055]: Loading rules from directory /etc/polkit-1/rules.d Jul 6 23:58:28.706827 polkitd[2055]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 6 23:58:28.719392 polkitd[2055]: Finished loading, compiling and executing 2 rules Jul 6 23:58:28.724473 dbus-daemon[1940]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 6 23:58:28.726154 systemd[1]: Started polkit.service - Authorization Manager. 
Jul 6 23:58:28.735430 polkitd[2055]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 6 23:58:28.743058 coreos-metadata[2029]: Jul 06 23:58:28.742 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 6 23:58:28.760060 coreos-metadata[2029]: Jul 06 23:58:28.751 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jul 6 23:58:28.761566 coreos-metadata[2029]: Jul 06 23:58:28.761 INFO Fetch successful Jul 6 23:58:28.761835 coreos-metadata[2029]: Jul 06 23:58:28.761 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 6 23:58:28.764115 coreos-metadata[2029]: Jul 06 23:58:28.763 INFO Fetch successful Jul 6 23:58:28.771531 unknown[2029]: wrote ssh authorized keys file for user: core Jul 6 23:58:28.842257 systemd-networkd[1816]: eth0: Gained IPv6LL Jul 6 23:58:28.854564 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 6 23:58:28.855996 systemd[1]: Reached target network-online.target - Network is Online. Jul 6 23:58:28.870505 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jul 6 23:58:28.880408 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:58:28.885281 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 6 23:58:28.907161 systemd-hostnamed[2000]: Hostname set to <ip-172-31-21-95> (transient) Jul 6 23:58:28.907285 systemd-resolved[1770]: System hostname changed to 'ip-172-31-21-95'. Jul 6 23:58:28.936233 update-ssh-keys[2127]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:58:28.935556 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 6 23:58:28.943773 systemd[1]: Finished sshkeys.service. Jul 6 23:58:28.952226 locksmithd[2003]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 6 23:58:29.007469 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 6 23:58:29.056876 amazon-ssm-agent[2135]: Initializing new seelog logger Jul 6 23:58:29.060577 amazon-ssm-agent[2135]: New Seelog Logger Creation Complete Jul 6 23:58:29.060577 amazon-ssm-agent[2135]: 2025/07/06 23:58:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:58:29.060577 amazon-ssm-agent[2135]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:58:29.063324 amazon-ssm-agent[2135]: 2025/07/06 23:58:29 processing appconfig overrides Jul 6 23:58:29.065434 amazon-ssm-agent[2135]: 2025/07/06 23:58:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:58:29.065434 amazon-ssm-agent[2135]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:58:29.065434 amazon-ssm-agent[2135]: 2025/07/06 23:58:29 processing appconfig overrides Jul 6 23:58:29.066840 amazon-ssm-agent[2135]: 2025/07/06 23:58:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:58:29.066840 amazon-ssm-agent[2135]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:58:29.066840 amazon-ssm-agent[2135]: 2025/07/06 23:58:29 processing appconfig overrides Jul 6 23:58:29.072679 amazon-ssm-agent[2135]: 2025-07-06 23:58:29 INFO Proxy environment variables: Jul 6 23:58:29.078400 amazon-ssm-agent[2135]: 2025/07/06 23:58:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:58:29.080061 amazon-ssm-agent[2135]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jul 6 23:58:29.080061 amazon-ssm-agent[2135]: 2025/07/06 23:58:29 processing appconfig overrides Jul 6 23:58:29.160666 containerd[1975]: time="2025-07-06T23:58:29.158704694Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 6 23:58:29.176316 amazon-ssm-agent[2135]: 2025-07-06 23:58:29 INFO https_proxy: Jul 6 23:58:29.275472 amazon-ssm-agent[2135]: 2025-07-06 23:58:29 INFO http_proxy: Jul 6 23:58:29.299206 containerd[1975]: time="2025-07-06T23:58:29.299103648Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:58:29.304739 containerd[1975]: time="2025-07-06T23:58:29.304567958Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:58:29.304739 containerd[1975]: time="2025-07-06T23:58:29.304625224Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 6 23:58:29.304739 containerd[1975]: time="2025-07-06T23:58:29.304656290Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 6 23:58:29.304946 containerd[1975]: time="2025-07-06T23:58:29.304846793Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 6 23:58:29.304946 containerd[1975]: time="2025-07-06T23:58:29.304870082Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 6 23:58:29.305019 containerd[1975]: time="2025-07-06T23:58:29.304944056Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:58:29.305019 containerd[1975]: time="2025-07-06T23:58:29.304962516Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:58:29.307050 containerd[1975]: time="2025-07-06T23:58:29.305209362Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:58:29.307050 containerd[1975]: time="2025-07-06T23:58:29.305238628Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 6 23:58:29.307050 containerd[1975]: time="2025-07-06T23:58:29.305260770Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:58:29.307050 containerd[1975]: time="2025-07-06T23:58:29.305276178Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 6 23:58:29.307050 containerd[1975]: time="2025-07-06T23:58:29.305376881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:58:29.307050 containerd[1975]: time="2025-07-06T23:58:29.305620878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jul 6 23:58:29.307050 containerd[1975]: time="2025-07-06T23:58:29.305792297Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:58:29.307050 containerd[1975]: time="2025-07-06T23:58:29.305812292Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 6 23:58:29.307050 containerd[1975]: time="2025-07-06T23:58:29.305907242Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 6 23:58:29.307050 containerd[1975]: time="2025-07-06T23:58:29.305963668Z" level=info msg="metadata content store policy set" policy=shared Jul 6 23:58:29.316202 containerd[1975]: time="2025-07-06T23:58:29.316110632Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 6 23:58:29.316202 containerd[1975]: time="2025-07-06T23:58:29.316191053Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 6 23:58:29.316354 containerd[1975]: time="2025-07-06T23:58:29.316220787Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 6 23:58:29.316354 containerd[1975]: time="2025-07-06T23:58:29.316241758Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 6 23:58:29.316354 containerd[1975]: time="2025-07-06T23:58:29.316267234Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 6 23:58:29.316470 containerd[1975]: time="2025-07-06T23:58:29.316447912Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 6 23:58:29.316835 containerd[1975]: time="2025-07-06T23:58:29.316810317Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 6 23:58:29.317048 containerd[1975]: time="2025-07-06T23:58:29.316950191Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 6 23:58:29.317048 containerd[1975]: time="2025-07-06T23:58:29.316976841Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 6 23:58:29.317048 containerd[1975]: time="2025-07-06T23:58:29.316996953Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 6 23:58:29.317048 containerd[1975]: time="2025-07-06T23:58:29.317019926Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 6 23:58:29.320240 containerd[1975]: time="2025-07-06T23:58:29.319653232Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 6 23:58:29.320240 containerd[1975]: time="2025-07-06T23:58:29.319691593Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 6 23:58:29.320240 containerd[1975]: time="2025-07-06T23:58:29.319732681Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jul 6 23:58:29.320240 containerd[1975]: time="2025-07-06T23:58:29.319758036Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 6 23:58:29.320240 containerd[1975]: time="2025-07-06T23:58:29.319778982Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 6 23:58:29.320240 containerd[1975]: time="2025-07-06T23:58:29.319813816Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 6 23:58:29.320240 containerd[1975]: time="2025-07-06T23:58:29.319832908Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 6 23:58:29.320240 containerd[1975]: time="2025-07-06T23:58:29.319880040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 6 23:58:29.320240 containerd[1975]: time="2025-07-06T23:58:29.319900894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 6 23:58:29.320240 containerd[1975]: time="2025-07-06T23:58:29.319919447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 6 23:58:29.320240 containerd[1975]: time="2025-07-06T23:58:29.319955486Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 6 23:58:29.320240 containerd[1975]: time="2025-07-06T23:58:29.319975065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 6 23:58:29.320240 containerd[1975]: time="2025-07-06T23:58:29.320000310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 6 23:58:29.320240 containerd[1975]: time="2025-07-06T23:58:29.320042927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 6 23:58:29.320795 containerd[1975]: time="2025-07-06T23:58:29.320064949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 6 23:58:29.320795 containerd[1975]: time="2025-07-06T23:58:29.320084798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 6 23:58:29.320795 containerd[1975]: time="2025-07-06T23:58:29.320126183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 6 23:58:29.320795 containerd[1975]: time="2025-07-06T23:58:29.320145096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 6 23:58:29.320795 containerd[1975]: time="2025-07-06T23:58:29.320164599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 6 23:58:29.320795 containerd[1975]: time="2025-07-06T23:58:29.320198079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 6 23:58:29.320795 containerd[1975]: time="2025-07-06T23:58:29.320221024Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 6 23:58:29.320795 containerd[1975]: time="2025-07-06T23:58:29.320266370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jul 6 23:58:29.320795 containerd[1975]: time="2025-07-06T23:58:29.320283860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 6 23:58:29.320795 containerd[1975]: time="2025-07-06T23:58:29.320298671Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 6 23:58:29.320795 containerd[1975]: time="2025-07-06T23:58:29.320382449Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 6 23:58:29.320795 containerd[1975]: time="2025-07-06T23:58:29.320479250Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 6 23:58:29.320795 containerd[1975]: time="2025-07-06T23:58:29.320498715Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 6 23:58:29.322956 containerd[1975]: time="2025-07-06T23:58:29.320517686Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 6 23:58:29.322956 containerd[1975]: time="2025-07-06T23:58:29.320533315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 6 23:58:29.322956 containerd[1975]: time="2025-07-06T23:58:29.321200318Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 6 23:58:29.322956 containerd[1975]: time="2025-07-06T23:58:29.321234150Z" level=info msg="NRI interface is disabled by configuration." Jul 6 23:58:29.322956 containerd[1975]: time="2025-07-06T23:58:29.321251316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 6 23:58:29.325062 containerd[1975]: time="2025-07-06T23:58:29.324239215Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 6 23:58:29.325062 containerd[1975]: time="2025-07-06T23:58:29.324372938Z" level=info msg="Connect containerd service" Jul 6 23:58:29.325062 containerd[1975]: time="2025-07-06T23:58:29.324428802Z" level=info msg="using legacy CRI server" Jul 6 23:58:29.325062 containerd[1975]: time="2025-07-06T23:58:29.324455308Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 6 23:58:29.326422 containerd[1975]: time="2025-07-06T23:58:29.325570037Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 6 23:58:29.328052 containerd[1975]: time="2025-07-06T23:58:29.327640260Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:58:29.329303 
containerd[1975]: time="2025-07-06T23:58:29.328163601Z" level=info msg="Start subscribing containerd event" Jul 6 23:58:29.329303 containerd[1975]: time="2025-07-06T23:58:29.328244228Z" level=info msg="Start recovering state" Jul 6 23:58:29.329303 containerd[1975]: time="2025-07-06T23:58:29.328324390Z" level=info msg="Start event monitor" Jul 6 23:58:29.329303 containerd[1975]: time="2025-07-06T23:58:29.328347125Z" level=info msg="Start snapshots syncer" Jul 6 23:58:29.329303 containerd[1975]: time="2025-07-06T23:58:29.328360436Z" level=info msg="Start cni network conf syncer for default" Jul 6 23:58:29.329303 containerd[1975]: time="2025-07-06T23:58:29.328372852Z" level=info msg="Start streaming server" Jul 6 23:58:29.337100 containerd[1975]: time="2025-07-06T23:58:29.336488021Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 6 23:58:29.337100 containerd[1975]: time="2025-07-06T23:58:29.336571308Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 6 23:58:29.336803 systemd[1]: Started containerd.service - containerd container runtime. Jul 6 23:58:29.347740 containerd[1975]: time="2025-07-06T23:58:29.346958321Z" level=info msg="containerd successfully booted in 0.192386s" Jul 6 23:58:29.375280 amazon-ssm-agent[2135]: 2025-07-06 23:58:29 INFO no_proxy: Jul 6 23:58:29.474211 amazon-ssm-agent[2135]: 2025-07-06 23:58:29 INFO Checking if agent identity type OnPrem can be assumed Jul 6 23:58:29.574045 amazon-ssm-agent[2135]: 2025-07-06 23:58:29 INFO Checking if agent identity type EC2 can be assumed Jul 6 23:58:29.606761 sshd_keygen[1979]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 6 23:58:29.671662 amazon-ssm-agent[2135]: 2025-07-06 23:58:29 INFO Agent will take identity from EC2 Jul 6 23:58:29.684971 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 6 23:58:29.698529 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 6 23:58:29.708394 systemd[1]: Started sshd@0-172.31.21.95:22-147.75.109.163:52694.service - OpenSSH per-connection server daemon (147.75.109.163:52694). Jul 6 23:58:29.722396 systemd[1]: issuegen.service: Deactivated successfully. Jul 6 23:58:29.722624 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 6 23:58:29.733168 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 6 23:58:29.773522 amazon-ssm-agent[2135]: 2025-07-06 23:58:29 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 6 23:58:29.777751 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 6 23:58:29.786540 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 6 23:58:29.797063 amazon-ssm-agent[2135]: 2025-07-06 23:58:29 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 6 23:58:29.797063 amazon-ssm-agent[2135]: 2025-07-06 23:58:29 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 6 23:58:29.797063 amazon-ssm-agent[2135]: 2025-07-06 23:58:29 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jul 6 23:58:29.797254 amazon-ssm-agent[2135]: 2025-07-06 23:58:29 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jul 6 23:58:29.797254 amazon-ssm-agent[2135]: 2025-07-06 23:58:29 INFO [amazon-ssm-agent] Starting Core Agent Jul 6 23:58:29.797254 amazon-ssm-agent[2135]: 2025-07-06 23:58:29 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Jul 6 23:58:29.797254 amazon-ssm-agent[2135]: 2025-07-06 23:58:29 INFO [Registrar] Starting registrar module Jul 6 23:58:29.797254 amazon-ssm-agent[2135]: 2025-07-06 23:58:29 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jul 6 23:58:29.797254 amazon-ssm-agent[2135]: 2025-07-06 23:58:29 INFO [EC2Identity] EC2 registration was successful. Jul 6 23:58:29.797254 amazon-ssm-agent[2135]: 2025-07-06 23:58:29 INFO [CredentialRefresher] credentialRefresher has started Jul 6 23:58:29.797254 amazon-ssm-agent[2135]: 2025-07-06 23:58:29 INFO [CredentialRefresher] Starting credentials refresher loop Jul 6 23:58:29.797254 amazon-ssm-agent[2135]: 2025-07-06 23:58:29 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jul 6 23:58:29.797482 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 6 23:58:29.798440 systemd[1]: Reached target getty.target - Login Prompts. Jul 6 23:58:29.872092 amazon-ssm-agent[2135]: 2025-07-06 23:58:29 INFO [CredentialRefresher] Next credential rotation will be in 32.0833270533 minutes Jul 6 23:58:29.912804 tar[1959]: linux-amd64/README.md Jul 6 23:58:29.927843 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 6 23:58:29.960195 sshd[2177]: Accepted publickey for core from 147.75.109.163 port 52694 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 6 23:58:29.962224 sshd[2177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:58:29.974749 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 6 23:58:29.982809 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 6 23:58:29.987100 systemd-logind[1952]: New session 1 of user core. Jul 6 23:58:30.003518 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 6 23:58:30.012483 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 6 23:58:30.027908 (systemd)[2191]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 6 23:58:30.180543 systemd[2191]: Queued start job for default target default.target. Jul 6 23:58:30.189137 systemd[2191]: Created slice app.slice - User Application Slice. Jul 6 23:58:30.189170 systemd[2191]: Reached target paths.target - Paths. Jul 6 23:58:30.189185 systemd[2191]: Reached target timers.target - Timers. Jul 6 23:58:30.193123 systemd[2191]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 6 23:58:30.212240 systemd[2191]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 6 23:58:30.212413 systemd[2191]: Reached target sockets.target - Sockets. Jul 6 23:58:30.212442 systemd[2191]: Reached target basic.target - Basic System. Jul 6 23:58:30.212505 systemd[2191]: Reached target default.target - Main User Target. Jul 6 23:58:30.212546 systemd[2191]: Startup finished in 177ms. Jul 6 23:58:30.212833 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 6 23:58:30.219269 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 6 23:58:30.374553 systemd[1]: Started sshd@1-172.31.21.95:22-147.75.109.163:56180.service - OpenSSH per-connection server daemon (147.75.109.163:56180). 
Jul 6 23:58:30.530813 sshd[2202]: Accepted publickey for core from 147.75.109.163 port 56180 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 6 23:58:30.531854 sshd[2202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:58:30.539403 systemd-logind[1952]: New session 2 of user core. Jul 6 23:58:30.551320 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 6 23:58:30.671153 sshd[2202]: pam_unix(sshd:session): session closed for user core Jul 6 23:58:30.677164 systemd[1]: sshd@1-172.31.21.95:22-147.75.109.163:56180.service: Deactivated successfully. Jul 6 23:58:30.679719 systemd[1]: session-2.scope: Deactivated successfully. Jul 6 23:58:30.680856 systemd-logind[1952]: Session 2 logged out. Waiting for processes to exit. Jul 6 23:58:30.682275 systemd-logind[1952]: Removed session 2. Jul 6 23:58:30.718397 systemd[1]: Started sshd@2-172.31.21.95:22-147.75.109.163:56182.service - OpenSSH per-connection server daemon (147.75.109.163:56182). Jul 6 23:58:30.733586 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:58:30.735907 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 6 23:58:30.738138 systemd[1]: Startup finished in 584ms (kernel) + 6.609s (initrd) + 6.472s (userspace) = 13.667s. Jul 6 23:58:30.744115 (kubelet)[2215]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:58:30.811504 amazon-ssm-agent[2135]: 2025-07-06 23:58:30 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jul 6 23:58:30.879812 sshd[2211]: Accepted publickey for core from 147.75.109.163 port 56182 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 6 23:58:30.881415 sshd[2211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:58:30.886902 systemd-logind[1952]: New session 3 of user core. Jul 6 23:58:30.892668 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 6 23:58:30.911868 amazon-ssm-agent[2135]: 2025-07-06 23:58:30 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2222) started Jul 6 23:58:31.013017 amazon-ssm-agent[2135]: 2025-07-06 23:58:30 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jul 6 23:58:31.024695 sshd[2211]: pam_unix(sshd:session): session closed for user core Jul 6 23:58:31.029617 systemd[1]: sshd@2-172.31.21.95:22-147.75.109.163:56182.service: Deactivated successfully. Jul 6 23:58:31.031857 systemd[1]: session-3.scope: Deactivated successfully. Jul 6 23:58:31.032727 systemd-logind[1952]: Session 3 logged out. Waiting for processes to exit. Jul 6 23:58:31.033743 systemd-logind[1952]: Removed session 3. 
Jul 6 23:58:31.069200 ntpd[1944]: Listen normally on 6 eth0 [fe80::415:34ff:fe33:2b51%2]:123 Jul 6 23:58:31.566053 kubelet[2215]: E0706 23:58:31.566004 2215 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:58:31.568863 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:58:31.569029 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:58:31.569326 systemd[1]: kubelet.service: Consumed 1.134s CPU time. Jul 6 23:58:36.008210 systemd-resolved[1770]: Clock change detected. Flushing caches. Jul 6 23:58:41.993898 systemd[1]: Started sshd@3-172.31.21.95:22-147.75.109.163:43130.service - OpenSSH per-connection server daemon (147.75.109.163:43130). Jul 6 23:58:42.167485 sshd[2244]: Accepted publickey for core from 147.75.109.163 port 43130 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 6 23:58:42.169214 sshd[2244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:58:42.174145 systemd-logind[1952]: New session 4 of user core. Jul 6 23:58:42.181155 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 6 23:58:42.304449 sshd[2244]: pam_unix(sshd:session): session closed for user core Jul 6 23:58:42.307898 systemd[1]: sshd@3-172.31.21.95:22-147.75.109.163:43130.service: Deactivated successfully. Jul 6 23:58:42.309699 systemd[1]: session-4.scope: Deactivated successfully. Jul 6 23:58:42.311465 systemd-logind[1952]: Session 4 logged out. Waiting for processes to exit. Jul 6 23:58:42.313124 systemd-logind[1952]: Removed session 4. Jul 6 23:58:42.333856 systemd[1]: Started sshd@4-172.31.21.95:22-147.75.109.163:43146.service - OpenSSH per-connection server daemon (147.75.109.163:43146). Jul 6 23:58:42.492258 sshd[2251]: Accepted publickey for core from 147.75.109.163 port 43146 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 6 23:58:42.493625 sshd[2251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:58:42.498323 systemd-logind[1952]: New session 5 of user core. Jul 6 23:58:42.505119 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 6 23:58:42.508609 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 6 23:58:42.526477 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:58:42.637547 sshd[2251]: pam_unix(sshd:session): session closed for user core Jul 6 23:58:42.642504 systemd[1]: sshd@4-172.31.21.95:22-147.75.109.163:43146.service: Deactivated successfully. Jul 6 23:58:42.644690 systemd[1]: session-5.scope: Deactivated successfully. Jul 6 23:58:42.645591 systemd-logind[1952]: Session 5 logged out. Waiting for processes to exit. Jul 6 23:58:42.647039 systemd-logind[1952]: Removed session 5. Jul 6 23:58:42.668396 systemd[1]: Started sshd@5-172.31.21.95:22-147.75.109.163:43158.service - OpenSSH per-connection server daemon (147.75.109.163:43158). Jul 6 23:58:42.736971 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 6 23:58:42.758344 (kubelet)[2268]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:58:42.807423 kubelet[2268]: E0706 23:58:42.807362 2268 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:58:42.812129 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:58:42.812328 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:58:42.834115 sshd[2261]: Accepted publickey for core from 147.75.109.163 port 43158 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 6 23:58:42.835687 sshd[2261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:58:42.841230 systemd-logind[1952]: New session 6 of user core. Jul 6 23:58:42.851106 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 6 23:58:42.968693 sshd[2261]: pam_unix(sshd:session): session closed for user core Jul 6 23:58:42.972256 systemd-logind[1952]: Session 6 logged out. Waiting for processes to exit. Jul 6 23:58:42.973213 systemd[1]: sshd@5-172.31.21.95:22-147.75.109.163:43158.service: Deactivated successfully. Jul 6 23:58:42.975362 systemd[1]: session-6.scope: Deactivated successfully. Jul 6 23:58:42.976183 systemd-logind[1952]: Removed session 6. Jul 6 23:58:43.006287 systemd[1]: Started sshd@6-172.31.21.95:22-147.75.109.163:43162.service - OpenSSH per-connection server daemon (147.75.109.163:43162). Jul 6 23:58:43.167233 sshd[2281]: Accepted publickey for core from 147.75.109.163 port 43162 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 6 23:58:43.168703 sshd[2281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:58:43.172974 systemd-logind[1952]: New session 7 of user core. Jul 6 23:58:43.177061 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 6 23:58:43.304621 sudo[2284]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 6 23:58:43.305262 sudo[2284]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:58:43.321646 sudo[2284]: pam_unix(sudo:session): session closed for user root Jul 6 23:58:43.346008 sshd[2281]: pam_unix(sshd:session): session closed for user core Jul 6 23:58:43.349217 systemd[1]: sshd@6-172.31.21.95:22-147.75.109.163:43162.service: Deactivated successfully. Jul 6 23:58:43.351476 systemd[1]: session-7.scope: Deactivated successfully. Jul 6 23:58:43.353180 systemd-logind[1952]: Session 7 logged out. Waiting for processes to exit. Jul 6 23:58:43.354541 systemd-logind[1952]: Removed session 7. Jul 6 23:58:43.379231 systemd[1]: Started sshd@7-172.31.21.95:22-147.75.109.163:43176.service - OpenSSH per-connection server daemon (147.75.109.163:43176). Jul 6 23:58:43.536999 sshd[2289]: Accepted publickey for core from 147.75.109.163 port 43176 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 6 23:58:43.538926 sshd[2289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:58:43.544367 systemd-logind[1952]: New session 8 of user core. Jul 6 23:58:43.551141 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 6 23:58:43.650778 sudo[2293]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 6 23:58:43.651283 sudo[2293]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:58:43.655672 sudo[2293]: pam_unix(sudo:session): session closed for user root Jul 6 23:58:43.661510 sudo[2292]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 6 23:58:43.662024 sudo[2292]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:58:43.674266 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 6 23:58:43.679428 auditctl[2296]: No rules Jul 6 23:58:43.680000 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:58:43.680227 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 6 23:58:43.683116 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 6 23:58:43.715124 augenrules[2314]: No rules Jul 6 23:58:43.716626 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 6 23:58:43.718281 sudo[2292]: pam_unix(sudo:session): session closed for user root Jul 6 23:58:43.741842 sshd[2289]: pam_unix(sshd:session): session closed for user core Jul 6 23:58:43.744618 systemd[1]: sshd@7-172.31.21.95:22-147.75.109.163:43176.service: Deactivated successfully. Jul 6 23:58:43.746738 systemd[1]: session-8.scope: Deactivated successfully. Jul 6 23:58:43.748388 systemd-logind[1952]: Session 8 logged out. Waiting for processes to exit. Jul 6 23:58:43.749484 systemd-logind[1952]: Removed session 8. Jul 6 23:58:43.779271 systemd[1]: Started sshd@8-172.31.21.95:22-147.75.109.163:43186.service - OpenSSH per-connection server daemon (147.75.109.163:43186). Jul 6 23:58:43.937831 sshd[2322]: Accepted publickey for core from 147.75.109.163 port 43186 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 6 23:58:43.939355 sshd[2322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:58:43.944734 systemd-logind[1952]: New session 9 of user core. Jul 6 23:58:43.950157 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 6 23:58:44.051616 sudo[2325]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 6 23:58:44.052035 sudo[2325]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:58:44.631272 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 6 23:58:44.631958 (dockerd)[2341]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 6 23:58:45.309164 dockerd[2341]: time="2025-07-06T23:58:45.309093583Z" level=info msg="Starting up" Jul 6 23:58:45.498714 systemd[1]: var-lib-docker-metacopy\x2dcheck757276269-merged.mount: Deactivated successfully. Jul 6 23:58:45.518398 dockerd[2341]: time="2025-07-06T23:58:45.518340461Z" level=info msg="Loading containers: start." Jul 6 23:58:45.648007 kernel: Initializing XFRM netlink socket Jul 6 23:58:45.677779 (udev-worker)[2364]: Network interface NamePolicy= disabled on kernel command line. Jul 6 23:58:45.736744 systemd-networkd[1816]: docker0: Link UP Jul 6 23:58:45.756729 dockerd[2341]: time="2025-07-06T23:58:45.756680170Z" level=info msg="Loading containers: done." 
Jul 6 23:58:45.785407 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3909923829-merged.mount: Deactivated successfully. Jul 6 23:58:45.791049 dockerd[2341]: time="2025-07-06T23:58:45.791002224Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 6 23:58:45.791205 dockerd[2341]: time="2025-07-06T23:58:45.791110881Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 6 23:58:45.791244 dockerd[2341]: time="2025-07-06T23:58:45.791218468Z" level=info msg="Daemon has completed initialization" Jul 6 23:58:45.823208 dockerd[2341]: time="2025-07-06T23:58:45.822907469Z" level=info msg="API listen on /run/docker.sock" Jul 6 23:58:45.823027 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 6 23:58:46.666789 containerd[1975]: time="2025-07-06T23:58:46.666751557Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 6 23:58:47.210074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount146961296.mount: Deactivated successfully. Jul 6 23:58:48.557257 containerd[1975]: time="2025-07-06T23:58:48.557206990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:48.558508 containerd[1975]: time="2025-07-06T23:58:48.558455519Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079099" Jul 6 23:58:48.559806 containerd[1975]: time="2025-07-06T23:58:48.559740576Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:48.562698 containerd[1975]: time="2025-07-06T23:58:48.562644589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:48.564075 containerd[1975]: time="2025-07-06T23:58:48.563850463Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 1.897057383s" Jul 6 23:58:48.564075 containerd[1975]: time="2025-07-06T23:58:48.563909194Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\"" Jul 6 23:58:48.565064 containerd[1975]: time="2025-07-06T23:58:48.565036119Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 6 23:58:50.029543 containerd[1975]: time="2025-07-06T23:58:50.029482399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:50.031143 containerd[1975]: time="2025-07-06T23:58:50.031087723Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018946" Jul 6 23:58:50.034003 containerd[1975]: time="2025-07-06T23:58:50.032146927Z" 
level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:50.036496 containerd[1975]: time="2025-07-06T23:58:50.036450677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:50.037533 containerd[1975]: time="2025-07-06T23:58:50.037498090Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 1.472431924s" Jul 6 23:58:50.037610 containerd[1975]: time="2025-07-06T23:58:50.037537561Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\"" Jul 6 23:58:50.038188 containerd[1975]: time="2025-07-06T23:58:50.038166223Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 6 23:58:51.322342 containerd[1975]: time="2025-07-06T23:58:51.322285493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:51.323460 containerd[1975]: time="2025-07-06T23:58:51.323292095Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155055" Jul 6 23:58:51.324408 containerd[1975]: time="2025-07-06T23:58:51.324358611Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:51.327208 containerd[1975]: time="2025-07-06T23:58:51.327157249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:51.328432 containerd[1975]: time="2025-07-06T23:58:51.328277708Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 1.290081807s" Jul 6 23:58:51.328432 containerd[1975]: time="2025-07-06T23:58:51.328314063Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\"" Jul 6 23:58:51.328953 containerd[1975]: time="2025-07-06T23:58:51.328839659Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 6 23:58:52.369503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2198248650.mount: Deactivated successfully. Jul 6 23:58:52.922707 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 6 23:58:52.928453 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 6 23:58:52.956272 containerd[1975]: time="2025-07-06T23:58:52.956215511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:52.961233 containerd[1975]: time="2025-07-06T23:58:52.960937575Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892746" Jul 6 23:58:52.963378 containerd[1975]: time="2025-07-06T23:58:52.962503234Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:52.968680 containerd[1975]: time="2025-07-06T23:58:52.968617648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:52.970897 containerd[1975]: time="2025-07-06T23:58:52.970627587Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 1.641511075s" Jul 6 23:58:52.970897 containerd[1975]: time="2025-07-06T23:58:52.970663931Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\"" Jul 6 23:58:52.972150 containerd[1975]: time="2025-07-06T23:58:52.972129786Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 6 23:58:53.187639 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:58:53.193073 (kubelet)[2557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:58:53.235133 kubelet[2557]: E0706 23:58:53.235071 2557 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:58:53.237783 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:58:53.238009 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:58:53.567283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount615299644.mount: Deactivated successfully. 
Jul 6 23:58:54.551805 containerd[1975]: time="2025-07-06T23:58:54.551748643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:54.553496 containerd[1975]: time="2025-07-06T23:58:54.553430689Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jul 6 23:58:54.554104 containerd[1975]: time="2025-07-06T23:58:54.554049916Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:54.557851 containerd[1975]: time="2025-07-06T23:58:54.557811167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:54.561617 containerd[1975]: time="2025-07-06T23:58:54.560849645Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.588413821s" Jul 6 23:58:54.561617 containerd[1975]: time="2025-07-06T23:58:54.560910139Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jul 6 23:58:54.562681 containerd[1975]: time="2025-07-06T23:58:54.562624017Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 6 23:58:55.015265 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3103566227.mount: Deactivated successfully. 
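Each completed pull is summarized in a single "Pulled image" line carrying the tag, digest, size, and duration, which makes these lines easy to mine from journald output. One way to do that is sketched below; the regular expression tolerates the backslash-escaped quotes journald shows and is not code from containerd:

package main

import (
	"fmt"
	"regexp"
)

// Matches the image reference, byte size, and pull duration out of a
// containerd "Pulled image" journal line.
var pulled = regexp.MustCompile(`Pulled image \\?"([^"\\]+)\\?".*size \\?"(\d+)\\?" in ([0-9.]+m?s)`)

func main() {
	line := `msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", size \"20939036\" in 1.588413821s"`
	if m := pulled.FindStringSubmatch(line); m != nil {
		fmt.Printf("image=%s bytes=%s took=%s\n", m[1], m[2], m[3])
	}
}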
Jul 6 23:58:55.019037 containerd[1975]: time="2025-07-06T23:58:55.018988881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:55.020003 containerd[1975]: time="2025-07-06T23:58:55.019892244Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 6 23:58:55.021903 containerd[1975]: time="2025-07-06T23:58:55.020937843Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:55.023251 containerd[1975]: time="2025-07-06T23:58:55.023197434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:55.024571 containerd[1975]: time="2025-07-06T23:58:55.023781215Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 461.119191ms" Jul 6 23:58:55.024571 containerd[1975]: time="2025-07-06T23:58:55.023810617Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 6 23:58:55.024718 containerd[1975]: time="2025-07-06T23:58:55.024698684Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 6 23:58:55.493172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3076258942.mount: Deactivated successfully. Jul 6 23:58:57.948272 containerd[1975]: time="2025-07-06T23:58:57.948210193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:57.950491 containerd[1975]: time="2025-07-06T23:58:57.950254100Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175" Jul 6 23:58:57.951545 containerd[1975]: time="2025-07-06T23:58:57.951486934Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:57.954387 containerd[1975]: time="2025-07-06T23:58:57.954356854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:57.955858 containerd[1975]: time="2025-07-06T23:58:57.955562895Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.930841207s" Jul 6 23:58:57.955858 containerd[1975]: time="2025-07-06T23:58:57.955601737Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jul 6 23:58:59.881797 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
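With the etcd pull finished, the image fetching in this stretch is done (the kube-apiserver image was pulled before this excerpt). Summing the sizes containerd logged gives the total transfer, which is useful when sizing an image pre-load or a registry mirror; a quick worked sum over the logged figures:

package main

import "fmt"

func main() {
	sizes := map[string]int64{ // bytes, copied from the "Pulled image" lines above
		"kube-controller-manager:v1.33.2": 27646507,
		"kube-scheduler:v1.33.2":          21782634,
		"kube-proxy:v1.33.2":              31891765,
		"coredns:v1.12.0":                 20939036,
		"pause:3.10":                      320368,
		"etcd:3.5.21-0":                   58938593,
	}
	var total int64
	for _, s := range sizes {
		total += s
	}
	fmt.Printf("total %d bytes (~%.0f MiB)\n", total, float64(total)/(1<<20))
}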
Jul 6 23:59:02.575150 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:59:02.585283 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:59:02.687241 systemd[1]: Reloading requested from client PID 2706 ('systemctl') (unit session-9.scope)... Jul 6 23:59:02.687260 systemd[1]: Reloading... Jul 6 23:59:02.846970 zram_generator::config[2746]: No configuration found. Jul 6 23:59:03.076831 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:59:03.222813 systemd[1]: Reloading finished in 534 ms. Jul 6 23:59:03.290165 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 6 23:59:03.290311 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 6 23:59:03.290732 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:59:03.297275 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:59:03.569822 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:59:03.581364 (kubelet)[2808]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:59:03.660505 kubelet[2808]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:59:03.660505 kubelet[2808]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 6 23:59:03.660505 kubelet[2808]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
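All three deprecation warnings above point at the same remedy: move the flags into the KubeletConfiguration file. A minimal sketch of the equivalents, held in a Go string for illustration. Field names are from the kubelet.config.k8s.io/v1beta1 API; the volume plugin path appears later in this log, but the socket path is an assumption since the endpoint value is never printed here:

package main

import "fmt"

const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
`

func main() { fmt.Print(kubeletConfig) }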
Jul 6 23:59:03.668900 kubelet[2808]: I0706 23:59:03.667063 2808 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:59:04.480269 kubelet[2808]: I0706 23:59:04.480207 2808 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 6 23:59:04.480269 kubelet[2808]: I0706 23:59:04.480254 2808 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:59:04.480587 kubelet[2808]: I0706 23:59:04.480565 2808 server.go:956] "Client rotation is on, will bootstrap in background" Jul 6 23:59:04.550935 kubelet[2808]: I0706 23:59:04.550760 2808 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:59:04.557195 kubelet[2808]: E0706 23:59:04.557134 2808 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.21.95:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.21.95:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 6 23:59:04.588593 kubelet[2808]: E0706 23:59:04.588503 2808 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:59:04.588593 kubelet[2808]: I0706 23:59:04.588572 2808 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:59:04.594194 kubelet[2808]: I0706 23:59:04.594143 2808 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:59:04.598264 kubelet[2808]: I0706 23:59:04.598168 2808 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:59:04.602847 kubelet[2808]: I0706 23:59:04.598270 2808 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-95","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:59:04.604909 kubelet[2808]: I0706 23:59:04.604843 2808 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:59:04.604909 kubelet[2808]: I0706 23:59:04.604905 2808 container_manager_linux.go:303] "Creating device plugin manager" Jul 6 23:59:04.605132 kubelet[2808]: I0706 23:59:04.605062 2808 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:59:04.611183 kubelet[2808]: I0706 23:59:04.611137 2808 kubelet.go:480] "Attempting to sync node with API server" Jul 6 23:59:04.611183 kubelet[2808]: I0706 23:59:04.611172 2808 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:59:04.611183 kubelet[2808]: I0706 23:59:04.611198 2808 kubelet.go:386] "Adding apiserver pod source" Jul 6 23:59:04.613926 kubelet[2808]: I0706 23:59:04.613357 2808 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:59:04.619292 kubelet[2808]: E0706 23:59:04.619239 2808 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.21.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-95&limit=500&resourceVersion=0\": dial tcp 172.31.21.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 6 23:59:04.626171 kubelet[2808]: E0706 23:59:04.626130 2808 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.21.95:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.21.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 6 
23:59:04.626882 kubelet[2808]: I0706 23:59:04.626842 2808 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 6 23:59:04.627900 kubelet[2808]: I0706 23:59:04.627693 2808 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 6 23:59:04.629896 kubelet[2808]: W0706 23:59:04.628723 2808 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 6 23:59:04.636218 kubelet[2808]: I0706 23:59:04.636185 2808 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:59:04.636338 kubelet[2808]: I0706 23:59:04.636258 2808 server.go:1289] "Started kubelet" Jul 6 23:59:04.638428 kubelet[2808]: I0706 23:59:04.638364 2808 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:59:04.641896 kubelet[2808]: I0706 23:59:04.641127 2808 server.go:317] "Adding debug handlers to kubelet server" Jul 6 23:59:04.641896 kubelet[2808]: I0706 23:59:04.641128 2808 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:59:04.641896 kubelet[2808]: I0706 23:59:04.641748 2808 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:59:04.644495 kubelet[2808]: I0706 23:59:04.643822 2808 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:59:04.656273 kubelet[2808]: E0706 23:59:04.645703 2808 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.21.95:6443/api/v1/namespaces/default/events\": dial tcp 172.31.21.95:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-21-95.184fcef8011bae76 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-95,UID:ip-172-31-21-95,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-21-95,},FirstTimestamp:2025-07-06 23:59:04.636214902 +0000 UTC m=+1.049786565,LastTimestamp:2025-07-06 23:59:04.636214902 +0000 UTC m=+1.049786565,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-95,}" Jul 6 23:59:04.656273 kubelet[2808]: I0706 23:59:04.651715 2808 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:59:04.656273 kubelet[2808]: E0706 23:59:04.655662 2808 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-21-95\" not found" Jul 6 23:59:04.656273 kubelet[2808]: I0706 23:59:04.655713 2808 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:59:04.656273 kubelet[2808]: I0706 23:59:04.656026 2808 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:59:04.656273 kubelet[2808]: I0706 23:59:04.656119 2808 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:59:04.656672 kubelet[2808]: E0706 23:59:04.656584 2808 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.21.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.21.95:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 6 23:59:04.660131 kubelet[2808]: E0706 23:59:04.659786 2808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-95?timeout=10s\": dial tcp 172.31.21.95:6443: connect: connection refused" interval="200ms" Jul 6 23:59:04.663558 kubelet[2808]: I0706 23:59:04.663526 2808 factory.go:223] Registration of the systemd container factory successfully Jul 6 23:59:04.667061 kubelet[2808]: I0706 23:59:04.666481 2808 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:59:04.675102 kubelet[2808]: I0706 23:59:04.674888 2808 factory.go:223] Registration of the containerd container factory successfully Jul 6 23:59:04.687087 kubelet[2808]: E0706 23:59:04.687054 2808 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:59:04.692647 kubelet[2808]: I0706 23:59:04.692598 2808 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 6 23:59:04.696071 kubelet[2808]: I0706 23:59:04.695795 2808 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 6 23:59:04.696071 kubelet[2808]: I0706 23:59:04.695827 2808 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 6 23:59:04.696071 kubelet[2808]: I0706 23:59:04.695852 2808 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 6 23:59:04.696071 kubelet[2808]: I0706 23:59:04.695860 2808 kubelet.go:2436] "Starting kubelet main sync loop" Jul 6 23:59:04.696071 kubelet[2808]: E0706 23:59:04.695927 2808 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:59:04.704919 kubelet[2808]: E0706 23:59:04.703751 2808 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.21.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.21.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 6 23:59:04.714120 kubelet[2808]: I0706 23:59:04.714091 2808 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:59:04.714702 kubelet[2808]: I0706 23:59:04.714685 2808 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:59:04.714824 kubelet[2808]: I0706 23:59:04.714817 2808 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:59:04.720654 kubelet[2808]: I0706 23:59:04.720600 2808 policy_none.go:49] "None policy: Start" Jul 6 23:59:04.720654 kubelet[2808]: I0706 23:59:04.720652 2808 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:59:04.720850 kubelet[2808]: I0706 23:59:04.720681 2808 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:59:04.729768 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 6 23:59:04.739720 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jul 6 23:59:04.745914 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 6 23:59:04.755855 kubelet[2808]: E0706 23:59:04.755803 2808 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-21-95\" not found" Jul 6 23:59:04.760020 kubelet[2808]: E0706 23:59:04.759359 2808 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 6 23:59:04.760020 kubelet[2808]: I0706 23:59:04.759566 2808 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:59:04.760020 kubelet[2808]: I0706 23:59:04.759577 2808 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:59:04.760020 kubelet[2808]: I0706 23:59:04.759940 2808 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:59:04.761851 kubelet[2808]: E0706 23:59:04.761124 2808 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 6 23:59:04.761851 kubelet[2808]: E0706 23:59:04.761174 2808 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-21-95\" not found" Jul 6 23:59:04.828648 systemd[1]: Created slice kubepods-burstable-pod2534a14a83c717fb2ad54dff98187ed5.slice - libcontainer container kubepods-burstable-pod2534a14a83c717fb2ad54dff98187ed5.slice. Jul 6 23:59:04.852021 kubelet[2808]: E0706 23:59:04.851990 2808 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-95\" not found" node="ip-172-31-21-95" Jul 6 23:59:04.855407 systemd[1]: Created slice kubepods-burstable-pod34ac805e62b370e290b5766c03007555.slice - libcontainer container kubepods-burstable-pod34ac805e62b370e290b5766c03007555.slice. 
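The slice names in the "Created slice" lines above follow the kubelet's systemd cgroup naming scheme: QoS class plus the pod UID, with dashes in the UID escaped to underscores (these static-pod UIDs happen to contain none). A minimal reconstruction of that naming, not kubelet's actual code:

package main

import (
	"fmt"
	"strings"
)

func podSlice(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// UID taken from the kube-apiserver static pod above.
	fmt.Println(podSlice("burstable", "2534a14a83c717fb2ad54dff98187ed5"))
}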
Jul 6 23:59:04.857414 kubelet[2808]: I0706 23:59:04.857368 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/34ac805e62b370e290b5766c03007555-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-95\" (UID: \"34ac805e62b370e290b5766c03007555\") " pod="kube-system/kube-controller-manager-ip-172-31-21-95" Jul 6 23:59:04.857783 kubelet[2808]: I0706 23:59:04.857414 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33bb6f7f9087b1ad1530608da711ea09-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-95\" (UID: \"33bb6f7f9087b1ad1530608da711ea09\") " pod="kube-system/kube-scheduler-ip-172-31-21-95" Jul 6 23:59:04.857783 kubelet[2808]: I0706 23:59:04.857449 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2534a14a83c717fb2ad54dff98187ed5-ca-certs\") pod \"kube-apiserver-ip-172-31-21-95\" (UID: \"2534a14a83c717fb2ad54dff98187ed5\") " pod="kube-system/kube-apiserver-ip-172-31-21-95" Jul 6 23:59:04.857783 kubelet[2808]: I0706 23:59:04.857470 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/34ac805e62b370e290b5766c03007555-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-95\" (UID: \"34ac805e62b370e290b5766c03007555\") " pod="kube-system/kube-controller-manager-ip-172-31-21-95" Jul 6 23:59:04.857783 kubelet[2808]: I0706 23:59:04.857495 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/34ac805e62b370e290b5766c03007555-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-95\" (UID: \"34ac805e62b370e290b5766c03007555\") " pod="kube-system/kube-controller-manager-ip-172-31-21-95" Jul 6 23:59:04.857783 kubelet[2808]: I0706 23:59:04.857521 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2534a14a83c717fb2ad54dff98187ed5-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-95\" (UID: \"2534a14a83c717fb2ad54dff98187ed5\") " pod="kube-system/kube-apiserver-ip-172-31-21-95" Jul 6 23:59:04.857992 kubelet[2808]: I0706 23:59:04.857542 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2534a14a83c717fb2ad54dff98187ed5-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-95\" (UID: \"2534a14a83c717fb2ad54dff98187ed5\") " pod="kube-system/kube-apiserver-ip-172-31-21-95" Jul 6 23:59:04.857992 kubelet[2808]: I0706 23:59:04.857586 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/34ac805e62b370e290b5766c03007555-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-95\" (UID: \"34ac805e62b370e290b5766c03007555\") " pod="kube-system/kube-controller-manager-ip-172-31-21-95" Jul 6 23:59:04.857992 kubelet[2808]: I0706 23:59:04.857611 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/34ac805e62b370e290b5766c03007555-flexvolume-dir\") pod 
\"kube-controller-manager-ip-172-31-21-95\" (UID: \"34ac805e62b370e290b5766c03007555\") " pod="kube-system/kube-controller-manager-ip-172-31-21-95" Jul 6 23:59:04.858604 kubelet[2808]: E0706 23:59:04.858574 2808 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-95\" not found" node="ip-172-31-21-95" Jul 6 23:59:04.861612 kubelet[2808]: E0706 23:59:04.861232 2808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-95?timeout=10s\": dial tcp 172.31.21.95:6443: connect: connection refused" interval="400ms" Jul 6 23:59:04.862782 systemd[1]: Created slice kubepods-burstable-pod33bb6f7f9087b1ad1530608da711ea09.slice - libcontainer container kubepods-burstable-pod33bb6f7f9087b1ad1530608da711ea09.slice. Jul 6 23:59:04.864060 kubelet[2808]: I0706 23:59:04.864025 2808 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-95" Jul 6 23:59:04.864906 kubelet[2808]: E0706 23:59:04.864426 2808 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.95:6443/api/v1/nodes\": dial tcp 172.31.21.95:6443: connect: connection refused" node="ip-172-31-21-95" Jul 6 23:59:04.865169 kubelet[2808]: E0706 23:59:04.865143 2808 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-95\" not found" node="ip-172-31-21-95" Jul 6 23:59:05.066738 kubelet[2808]: I0706 23:59:05.066625 2808 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-95" Jul 6 23:59:05.068232 kubelet[2808]: E0706 23:59:05.068193 2808 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.95:6443/api/v1/nodes\": dial tcp 172.31.21.95:6443: connect: connection refused" node="ip-172-31-21-95" Jul 6 23:59:05.154153 containerd[1975]: time="2025-07-06T23:59:05.153996314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-95,Uid:2534a14a83c717fb2ad54dff98187ed5,Namespace:kube-system,Attempt:0,}" Jul 6 23:59:05.160078 containerd[1975]: time="2025-07-06T23:59:05.160032012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-95,Uid:34ac805e62b370e290b5766c03007555,Namespace:kube-system,Attempt:0,}" Jul 6 23:59:05.166325 containerd[1975]: time="2025-07-06T23:59:05.166286537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-95,Uid:33bb6f7f9087b1ad1530608da711ea09,Namespace:kube-system,Attempt:0,}" Jul 6 23:59:05.262615 kubelet[2808]: E0706 23:59:05.262552 2808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-95?timeout=10s\": dial tcp 172.31.21.95:6443: connect: connection refused" interval="800ms" Jul 6 23:59:05.470768 kubelet[2808]: I0706 23:59:05.470556 2808 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-95" Jul 6 23:59:05.470958 kubelet[2808]: E0706 23:59:05.470928 2808 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.95:6443/api/v1/nodes\": dial tcp 172.31.21.95:6443: connect: connection refused" node="ip-172-31-21-95" Jul 6 23:59:05.574378 kubelet[2808]: E0706 23:59:05.574304 2808 reflector.go:200] "Failed to watch" err="failed to list 
*v1.RuntimeClass: Get \"https://172.31.21.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.21.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 6 23:59:05.677381 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3579757583.mount: Deactivated successfully. Jul 6 23:59:05.694606 containerd[1975]: time="2025-07-06T23:59:05.694538622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:59:05.696761 containerd[1975]: time="2025-07-06T23:59:05.696713870Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:59:05.698826 containerd[1975]: time="2025-07-06T23:59:05.698763363Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:59:05.700936 containerd[1975]: time="2025-07-06T23:59:05.700880426Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:59:05.703117 containerd[1975]: time="2025-07-06T23:59:05.703032234Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:59:05.705466 containerd[1975]: time="2025-07-06T23:59:05.705418162Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:59:05.707116 containerd[1975]: time="2025-07-06T23:59:05.707046173Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 6 23:59:05.710345 containerd[1975]: time="2025-07-06T23:59:05.710299027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:59:05.713051 containerd[1975]: time="2025-07-06T23:59:05.711101026Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 550.994565ms" Jul 6 23:59:05.713370 containerd[1975]: time="2025-07-06T23:59:05.713319879Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 559.247536ms" Jul 6 23:59:05.716153 containerd[1975]: time="2025-07-06T23:59:05.716113894Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 549.757275ms" Jul 6 23:59:05.895763 kubelet[2808]: E0706 23:59:05.895652 2808 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.21.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.21.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 6 23:59:05.937903 containerd[1975]: time="2025-07-06T23:59:05.937410376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:59:05.939891 containerd[1975]: time="2025-07-06T23:59:05.937717315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:59:05.940885 containerd[1975]: time="2025-07-06T23:59:05.940203769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:05.940885 containerd[1975]: time="2025-07-06T23:59:05.940332748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:05.944908 containerd[1975]: time="2025-07-06T23:59:05.942907772Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:59:05.944908 containerd[1975]: time="2025-07-06T23:59:05.942973127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:59:05.944908 containerd[1975]: time="2025-07-06T23:59:05.942999555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:05.944908 containerd[1975]: time="2025-07-06T23:59:05.943122952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:05.946936 containerd[1975]: time="2025-07-06T23:59:05.945629853Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:59:05.946936 containerd[1975]: time="2025-07-06T23:59:05.945713844Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:59:05.946936 containerd[1975]: time="2025-07-06T23:59:05.945740026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:05.946936 containerd[1975]: time="2025-07-06T23:59:05.945854929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:05.984102 systemd[1]: Started cri-containerd-76d40783b1e6e0466a27bb21660c4753552ded53ea8b11bd370cf91c8dc46b2c.scope - libcontainer container 76d40783b1e6e0466a27bb21660c4753552ded53ea8b11bd370cf91c8dc46b2c. Jul 6 23:59:05.998285 systemd[1]: Started cri-containerd-066c820cf3fc31d2a01b35f617fe42f05ab7d4083b9604ddc58c6c5f673f2cc0.scope - libcontainer container 066c820cf3fc31d2a01b35f617fe42f05ab7d4083b9604ddc58c6c5f673f2cc0. 
Jul 6 23:59:06.000675 systemd[1]: Started cri-containerd-a476558a506a617c3283d244d365c6d85e86dbd265997d0a4eb89c114f841ea2.scope - libcontainer container a476558a506a617c3283d244d365c6d85e86dbd265997d0a4eb89c114f841ea2. Jul 6 23:59:06.066040 kubelet[2808]: E0706 23:59:06.063448 2808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-95?timeout=10s\": dial tcp 172.31.21.95:6443: connect: connection refused" interval="1.6s" Jul 6 23:59:06.072925 kubelet[2808]: E0706 23:59:06.070709 2808 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.21.95:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.21.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 6 23:59:06.074362 kubelet[2808]: E0706 23:59:06.074242 2808 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.21.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-95&limit=500&resourceVersion=0\": dial tcp 172.31.21.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 6 23:59:06.096205 containerd[1975]: time="2025-07-06T23:59:06.095470795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-95,Uid:2534a14a83c717fb2ad54dff98187ed5,Namespace:kube-system,Attempt:0,} returns sandbox id \"066c820cf3fc31d2a01b35f617fe42f05ab7d4083b9604ddc58c6c5f673f2cc0\"" Jul 6 23:59:06.099026 containerd[1975]: time="2025-07-06T23:59:06.098756279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-95,Uid:34ac805e62b370e290b5766c03007555,Namespace:kube-system,Attempt:0,} returns sandbox id \"76d40783b1e6e0466a27bb21660c4753552ded53ea8b11bd370cf91c8dc46b2c\"" Jul 6 23:59:06.108973 containerd[1975]: time="2025-07-06T23:59:06.108929319Z" level=info msg="CreateContainer within sandbox \"066c820cf3fc31d2a01b35f617fe42f05ab7d4083b9604ddc58c6c5f673f2cc0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 6 23:59:06.113859 containerd[1975]: time="2025-07-06T23:59:06.113016858Z" level=info msg="CreateContainer within sandbox \"76d40783b1e6e0466a27bb21660c4753552ded53ea8b11bd370cf91c8dc46b2c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 6 23:59:06.114894 containerd[1975]: time="2025-07-06T23:59:06.114834358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-95,Uid:33bb6f7f9087b1ad1530608da711ea09,Namespace:kube-system,Attempt:0,} returns sandbox id \"a476558a506a617c3283d244d365c6d85e86dbd265997d0a4eb89c114f841ea2\"" Jul 6 23:59:06.123179 containerd[1975]: time="2025-07-06T23:59:06.123079383Z" level=info msg="CreateContainer within sandbox \"a476558a506a617c3283d244d365c6d85e86dbd265997d0a4eb89c114f841ea2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 6 23:59:06.143327 containerd[1975]: time="2025-07-06T23:59:06.143269954Z" level=info msg="CreateContainer within sandbox \"066c820cf3fc31d2a01b35f617fe42f05ab7d4083b9604ddc58c6c5f673f2cc0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7a671fd00b9c82f48b247030f7e434c2e4988d0a8efaf7ec7d74c5e51a9b2e12\"" Jul 6 23:59:06.144097 containerd[1975]: 
time="2025-07-06T23:59:06.144063618Z" level=info msg="StartContainer for \"7a671fd00b9c82f48b247030f7e434c2e4988d0a8efaf7ec7d74c5e51a9b2e12\"" Jul 6 23:59:06.162058 containerd[1975]: time="2025-07-06T23:59:06.161895150Z" level=info msg="CreateContainer within sandbox \"76d40783b1e6e0466a27bb21660c4753552ded53ea8b11bd370cf91c8dc46b2c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0348eedc37240641ad754b7152e6045467f61b3495910025147efe8a2f5a898e\"" Jul 6 23:59:06.164275 containerd[1975]: time="2025-07-06T23:59:06.164232785Z" level=info msg="StartContainer for \"0348eedc37240641ad754b7152e6045467f61b3495910025147efe8a2f5a898e\"" Jul 6 23:59:06.173243 containerd[1975]: time="2025-07-06T23:59:06.173124041Z" level=info msg="CreateContainer within sandbox \"a476558a506a617c3283d244d365c6d85e86dbd265997d0a4eb89c114f841ea2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b57563c6b9d8aac39c6d410237675f12ffb3e59c350281dfa0e357d64351e8f5\"" Jul 6 23:59:06.175412 containerd[1975]: time="2025-07-06T23:59:06.174036596Z" level=info msg="StartContainer for \"b57563c6b9d8aac39c6d410237675f12ffb3e59c350281dfa0e357d64351e8f5\"" Jul 6 23:59:06.181609 systemd[1]: Started cri-containerd-7a671fd00b9c82f48b247030f7e434c2e4988d0a8efaf7ec7d74c5e51a9b2e12.scope - libcontainer container 7a671fd00b9c82f48b247030f7e434c2e4988d0a8efaf7ec7d74c5e51a9b2e12. Jul 6 23:59:06.240380 systemd[1]: Started cri-containerd-0348eedc37240641ad754b7152e6045467f61b3495910025147efe8a2f5a898e.scope - libcontainer container 0348eedc37240641ad754b7152e6045467f61b3495910025147efe8a2f5a898e. Jul 6 23:59:06.243263 systemd[1]: Started cri-containerd-b57563c6b9d8aac39c6d410237675f12ffb3e59c350281dfa0e357d64351e8f5.scope - libcontainer container b57563c6b9d8aac39c6d410237675f12ffb3e59c350281dfa0e357d64351e8f5. 
Jul 6 23:59:06.276265 kubelet[2808]: I0706 23:59:06.276210 2808 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-95" Jul 6 23:59:06.276902 kubelet[2808]: E0706 23:59:06.276728 2808 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.95:6443/api/v1/nodes\": dial tcp 172.31.21.95:6443: connect: connection refused" node="ip-172-31-21-95" Jul 6 23:59:06.286546 containerd[1975]: time="2025-07-06T23:59:06.286498468Z" level=info msg="StartContainer for \"7a671fd00b9c82f48b247030f7e434c2e4988d0a8efaf7ec7d74c5e51a9b2e12\" returns successfully" Jul 6 23:59:06.332587 containerd[1975]: time="2025-07-06T23:59:06.332386575Z" level=info msg="StartContainer for \"0348eedc37240641ad754b7152e6045467f61b3495910025147efe8a2f5a898e\" returns successfully" Jul 6 23:59:06.332587 containerd[1975]: time="2025-07-06T23:59:06.332491060Z" level=info msg="StartContainer for \"b57563c6b9d8aac39c6d410237675f12ffb3e59c350281dfa0e357d64351e8f5\" returns successfully" Jul 6 23:59:06.585894 kubelet[2808]: E0706 23:59:06.585265 2808 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.21.95:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.21.95:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 6 23:59:06.725812 kubelet[2808]: E0706 23:59:06.725776 2808 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-95\" not found" node="ip-172-31-21-95" Jul 6 23:59:06.729832 kubelet[2808]: E0706 23:59:06.729801 2808 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-95\" not found" node="ip-172-31-21-95" Jul 6 23:59:06.733581 kubelet[2808]: E0706 23:59:06.733532 2808 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-95\" not found" node="ip-172-31-21-95" Jul 6 23:59:07.738023 kubelet[2808]: E0706 23:59:07.737990 2808 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-95\" not found" node="ip-172-31-21-95" Jul 6 23:59:07.738499 kubelet[2808]: E0706 23:59:07.738478 2808 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-95\" not found" node="ip-172-31-21-95" Jul 6 23:59:07.878573 kubelet[2808]: I0706 23:59:07.878542 2808 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-95" Jul 6 23:59:09.520299 kubelet[2808]: E0706 23:59:09.520267 2808 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-95\" not found" node="ip-172-31-21-95" Jul 6 23:59:09.801313 kubelet[2808]: E0706 23:59:09.801181 2808 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-21-95\" not found" node="ip-172-31-21-95" Jul 6 23:59:09.828062 kubelet[2808]: I0706 23:59:09.828023 2808 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-21-95" Jul 6 23:59:09.828062 kubelet[2808]: E0706 23:59:09.828071 2808 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-21-95\": node \"ip-172-31-21-95\" not found" Jul 6 23:59:09.857670 
kubelet[2808]: I0706 23:59:09.857631 2808 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-95" Jul 6 23:59:09.897073 kubelet[2808]: E0706 23:59:09.896765 2808 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-21-95\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-21-95" Jul 6 23:59:09.897073 kubelet[2808]: I0706 23:59:09.896819 2808 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-21-95" Jul 6 23:59:09.903431 kubelet[2808]: E0706 23:59:09.903389 2808 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-21-95\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-21-95" Jul 6 23:59:09.904167 kubelet[2808]: I0706 23:59:09.903949 2808 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-21-95" Jul 6 23:59:09.911418 kubelet[2808]: E0706 23:59:09.911342 2808 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-21-95\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-21-95" Jul 6 23:59:10.623251 kubelet[2808]: I0706 23:59:10.623197 2808 apiserver.go:52] "Watching apiserver" Jul 6 23:59:10.657183 kubelet[2808]: I0706 23:59:10.657148 2808 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:59:12.053319 systemd[1]: Reloading requested from client PID 3095 ('systemctl') (unit session-9.scope)... Jul 6 23:59:12.053338 systemd[1]: Reloading... Jul 6 23:59:12.155280 zram_generator::config[3131]: No configuration found. Jul 6 23:59:12.257175 kubelet[2808]: I0706 23:59:12.257141 2808 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-95" Jul 6 23:59:12.307849 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:59:12.409353 systemd[1]: Reloading finished in 355 ms. Jul 6 23:59:12.449174 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:59:12.467770 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:59:12.468016 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:59:12.468068 systemd[1]: kubelet.service: Consumed 1.303s CPU time, 128.9M memory peak, 0B memory swap peak. Jul 6 23:59:12.478214 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:59:12.710435 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:59:12.722312 (kubelet)[3195]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:59:12.776384 kubelet[3195]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:59:12.776384 kubelet[3195]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jul 6 23:59:12.776384 kubelet[3195]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:59:12.778133 kubelet[3195]: I0706 23:59:12.776988 3195 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:59:12.785840 kubelet[3195]: I0706 23:59:12.785790 3195 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 6 23:59:12.786764 kubelet[3195]: I0706 23:59:12.786749 3195 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:59:12.787307 kubelet[3195]: I0706 23:59:12.787283 3195 server.go:956] "Client rotation is on, will bootstrap in background" Jul 6 23:59:12.788802 kubelet[3195]: I0706 23:59:12.788773 3195 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 6 23:59:12.800189 kubelet[3195]: I0706 23:59:12.800150 3195 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:59:12.803324 kubelet[3195]: E0706 23:59:12.803297 3195 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:59:12.803474 kubelet[3195]: I0706 23:59:12.803464 3195 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:59:12.807418 kubelet[3195]: I0706 23:59:12.807386 3195 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:59:12.807915 kubelet[3195]: I0706 23:59:12.807838 3195 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:59:12.808395 kubelet[3195]: I0706 23:59:12.808024 3195 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-95","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:59:12.810281 kubelet[3195]: I0706 23:59:12.810252 3195 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:59:12.810393 kubelet[3195]: I0706 23:59:12.810383 3195 container_manager_linux.go:303] "Creating device plugin manager" Jul 6 23:59:12.811245 kubelet[3195]: I0706 23:59:12.811228 3195 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:59:12.811559 kubelet[3195]: I0706 23:59:12.811547 3195 kubelet.go:480] "Attempting to sync node with API server" Jul 6 23:59:12.811925 kubelet[3195]: I0706 23:59:12.811912 3195 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:59:12.812859 kubelet[3195]: I0706 23:59:12.812846 3195 kubelet.go:386] "Adding apiserver pod source" Jul 6 23:59:12.812991 kubelet[3195]: I0706 23:59:12.812978 3195 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:59:12.821673 kubelet[3195]: I0706 23:59:12.821643 3195 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 6 23:59:12.822891 kubelet[3195]: I0706 23:59:12.822544 3195 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 6 23:59:12.825720 kubelet[3195]: I0706 23:59:12.825700 3195 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:59:12.825942 kubelet[3195]: I0706 23:59:12.825930 3195 server.go:1289] "Started kubelet" Jul 6 23:59:12.831124 kubelet[3195]: I0706 23:59:12.831071 3195 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:59:12.837556 kubelet[3195]: I0706 23:59:12.832246 3195 
server.go:317] "Adding debug handlers to kubelet server" Jul 6 23:59:12.837556 kubelet[3195]: I0706 23:59:12.834143 3195 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:59:12.837556 kubelet[3195]: I0706 23:59:12.834508 3195 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:59:12.838826 kubelet[3195]: I0706 23:59:12.838796 3195 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:59:12.850948 kubelet[3195]: I0706 23:59:12.850126 3195 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:59:12.853169 kubelet[3195]: I0706 23:59:12.853149 3195 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:59:12.855841 kubelet[3195]: I0706 23:59:12.855128 3195 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 6 23:59:12.858594 kubelet[3195]: I0706 23:59:12.858570 3195 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:59:12.858857 kubelet[3195]: I0706 23:59:12.858843 3195 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:59:12.864059 kubelet[3195]: I0706 23:59:12.863161 3195 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:59:12.872534 kubelet[3195]: E0706 23:59:12.872494 3195 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:59:12.876116 kubelet[3195]: I0706 23:59:12.876085 3195 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 6 23:59:12.881345 kubelet[3195]: I0706 23:59:12.881305 3195 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 6 23:59:12.881487 kubelet[3195]: I0706 23:59:12.881357 3195 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 6 23:59:12.881487 kubelet[3195]: I0706 23:59:12.881368 3195 kubelet.go:2436] "Starting kubelet main sync loop" Jul 6 23:59:12.881487 kubelet[3195]: E0706 23:59:12.881426 3195 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:59:12.884116 kubelet[3195]: I0706 23:59:12.884074 3195 factory.go:223] Registration of the containerd container factory successfully Jul 6 23:59:12.884116 kubelet[3195]: I0706 23:59:12.884094 3195 factory.go:223] Registration of the systemd container factory successfully Jul 6 23:59:12.935119 kubelet[3195]: I0706 23:59:12.935091 3195 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:59:12.935119 kubelet[3195]: I0706 23:59:12.935108 3195 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:59:12.935119 kubelet[3195]: I0706 23:59:12.935130 3195 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:59:12.935306 kubelet[3195]: I0706 23:59:12.935260 3195 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 6 23:59:12.935306 kubelet[3195]: I0706 23:59:12.935269 3195 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 6 23:59:12.935306 kubelet[3195]: I0706 23:59:12.935287 3195 policy_none.go:49] "None policy: Start" Jul 6 23:59:12.935306 kubelet[3195]: I0706 23:59:12.935296 3195 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:59:12.935306 kubelet[3195]: I0706 23:59:12.935304 3195 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:59:12.935426 kubelet[3195]: I0706 23:59:12.935389 3195 state_mem.go:75] "Updated machine memory state" Jul 6 23:59:12.940296 kubelet[3195]: E0706 23:59:12.939718 3195 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 6 23:59:12.940296 kubelet[3195]: I0706 23:59:12.939912 3195 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:59:12.940296 kubelet[3195]: I0706 23:59:12.939922 3195 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:59:12.940296 kubelet[3195]: I0706 23:59:12.940123 3195 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:59:12.944085 kubelet[3195]: E0706 23:59:12.944059 3195 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 6 23:59:12.983599 kubelet[3195]: I0706 23:59:12.983487 3195 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-21-95" Jul 6 23:59:12.984273 kubelet[3195]: I0706 23:59:12.983548 3195 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-95" Jul 6 23:59:12.986558 kubelet[3195]: I0706 23:59:12.985882 3195 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-21-95" Jul 6 23:59:12.995409 kubelet[3195]: E0706 23:59:12.995357 3195 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-21-95\" already exists" pod="kube-system/kube-apiserver-ip-172-31-21-95" Jul 6 23:59:13.051566 kubelet[3195]: I0706 23:59:13.051539 3195 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-95" Jul 6 23:59:13.062229 kubelet[3195]: I0706 23:59:13.061936 3195 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-21-95" Jul 6 23:59:13.062229 kubelet[3195]: I0706 23:59:13.062015 3195 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-21-95" Jul 6 23:59:13.063626 kubelet[3195]: I0706 23:59:13.062924 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2534a14a83c717fb2ad54dff98187ed5-ca-certs\") pod \"kube-apiserver-ip-172-31-21-95\" (UID: \"2534a14a83c717fb2ad54dff98187ed5\") " pod="kube-system/kube-apiserver-ip-172-31-21-95" Jul 6 23:59:13.063626 kubelet[3195]: I0706 23:59:13.063184 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2534a14a83c717fb2ad54dff98187ed5-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-95\" (UID: \"2534a14a83c717fb2ad54dff98187ed5\") " pod="kube-system/kube-apiserver-ip-172-31-21-95" Jul 6 23:59:13.063626 kubelet[3195]: I0706 23:59:13.063565 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2534a14a83c717fb2ad54dff98187ed5-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-95\" (UID: \"2534a14a83c717fb2ad54dff98187ed5\") " pod="kube-system/kube-apiserver-ip-172-31-21-95" Jul 6 23:59:13.063626 kubelet[3195]: I0706 23:59:13.063591 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/34ac805e62b370e290b5766c03007555-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-95\" (UID: \"34ac805e62b370e290b5766c03007555\") " pod="kube-system/kube-controller-manager-ip-172-31-21-95" Jul 6 23:59:13.063773 kubelet[3195]: I0706 23:59:13.063641 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33bb6f7f9087b1ad1530608da711ea09-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-95\" (UID: \"33bb6f7f9087b1ad1530608da711ea09\") " pod="kube-system/kube-scheduler-ip-172-31-21-95" Jul 6 23:59:13.063773 kubelet[3195]: I0706 23:59:13.063658 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/34ac805e62b370e290b5766c03007555-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-95\" (UID: 
\"34ac805e62b370e290b5766c03007555\") " pod="kube-system/kube-controller-manager-ip-172-31-21-95" Jul 6 23:59:13.063773 kubelet[3195]: I0706 23:59:13.063674 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/34ac805e62b370e290b5766c03007555-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-95\" (UID: \"34ac805e62b370e290b5766c03007555\") " pod="kube-system/kube-controller-manager-ip-172-31-21-95" Jul 6 23:59:13.063856 kubelet[3195]: I0706 23:59:13.063692 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/34ac805e62b370e290b5766c03007555-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-95\" (UID: \"34ac805e62b370e290b5766c03007555\") " pod="kube-system/kube-controller-manager-ip-172-31-21-95" Jul 6 23:59:13.063856 kubelet[3195]: I0706 23:59:13.063807 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/34ac805e62b370e290b5766c03007555-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-95\" (UID: \"34ac805e62b370e290b5766c03007555\") " pod="kube-system/kube-controller-manager-ip-172-31-21-95" Jul 6 23:59:13.814801 kubelet[3195]: I0706 23:59:13.813547 3195 apiserver.go:52] "Watching apiserver" Jul 6 23:59:13.861177 kubelet[3195]: I0706 23:59:13.859524 3195 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:59:13.967436 kubelet[3195]: I0706 23:59:13.967370 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-21-95" podStartSLOduration=1.967351191 podStartE2EDuration="1.967351191s" podCreationTimestamp="2025-07-06 23:59:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:59:13.956598177 +0000 UTC m=+1.228526562" watchObservedRunningTime="2025-07-06 23:59:13.967351191 +0000 UTC m=+1.239279573" Jul 6 23:59:13.986616 kubelet[3195]: I0706 23:59:13.986093 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-21-95" podStartSLOduration=1.986059149 podStartE2EDuration="1.986059149s" podCreationTimestamp="2025-07-06 23:59:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:59:13.967685289 +0000 UTC m=+1.239613636" watchObservedRunningTime="2025-07-06 23:59:13.986059149 +0000 UTC m=+1.257987486" Jul 6 23:59:14.003442 kubelet[3195]: I0706 23:59:14.003372 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-21-95" podStartSLOduration=2.003352291 podStartE2EDuration="2.003352291s" podCreationTimestamp="2025-07-06 23:59:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:59:13.986547105 +0000 UTC m=+1.258475452" watchObservedRunningTime="2025-07-06 23:59:14.003352291 +0000 UTC m=+1.275280638" Jul 6 23:59:14.012919 update_engine[1954]: I20250706 23:59:14.010913 1954 update_attempter.cc:509] Updating boot flags... 
Jul 6 23:59:14.103905 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3256) Jul 6 23:59:14.343906 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3242) Jul 6 23:59:18.240537 kubelet[3195]: I0706 23:59:18.240491 3195 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:59:18.241034 containerd[1975]: time="2025-07-06T23:59:18.240835294Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 6 23:59:18.241312 kubelet[3195]: I0706 23:59:18.241033 3195 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:59:18.725804 systemd[1]: Created slice kubepods-besteffort-podd2ca0806_5e2f_4da7_a0cf_690744c30b1c.slice - libcontainer container kubepods-besteffort-podd2ca0806_5e2f_4da7_a0cf_690744c30b1c.slice. Jul 6 23:59:18.803307 kubelet[3195]: I0706 23:59:18.803266 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2ca0806-5e2f-4da7-a0cf-690744c30b1c-xtables-lock\") pod \"kube-proxy-mrzf6\" (UID: \"d2ca0806-5e2f-4da7-a0cf-690744c30b1c\") " pod="kube-system/kube-proxy-mrzf6" Jul 6 23:59:18.803570 kubelet[3195]: I0706 23:59:18.803539 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzk46\" (UniqueName: \"kubernetes.io/projected/d2ca0806-5e2f-4da7-a0cf-690744c30b1c-kube-api-access-lzk46\") pod \"kube-proxy-mrzf6\" (UID: \"d2ca0806-5e2f-4da7-a0cf-690744c30b1c\") " pod="kube-system/kube-proxy-mrzf6" Jul 6 23:59:18.803641 kubelet[3195]: I0706 23:59:18.803619 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d2ca0806-5e2f-4da7-a0cf-690744c30b1c-kube-proxy\") pod \"kube-proxy-mrzf6\" (UID: \"d2ca0806-5e2f-4da7-a0cf-690744c30b1c\") " pod="kube-system/kube-proxy-mrzf6" Jul 6 23:59:18.803674 kubelet[3195]: I0706 23:59:18.803648 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2ca0806-5e2f-4da7-a0cf-690744c30b1c-lib-modules\") pod \"kube-proxy-mrzf6\" (UID: \"d2ca0806-5e2f-4da7-a0cf-690744c30b1c\") " pod="kube-system/kube-proxy-mrzf6" Jul 6 23:59:19.036820 containerd[1975]: time="2025-07-06T23:59:19.036709851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mrzf6,Uid:d2ca0806-5e2f-4da7-a0cf-690744c30b1c,Namespace:kube-system,Attempt:0,}" Jul 6 23:59:19.073196 containerd[1975]: time="2025-07-06T23:59:19.072822037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:59:19.073196 containerd[1975]: time="2025-07-06T23:59:19.072932619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:59:19.073196 containerd[1975]: time="2025-07-06T23:59:19.072968296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:19.073196 containerd[1975]: time="2025-07-06T23:59:19.073076077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
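The "Updating runtime config through cri with podcidr" line above corresponds to a single CRI call: the kubelet sends UpdateRuntimeConfig to the runtime with the node's pod CIDR, and containerd then waits for a CNI config as the next log line says. A minimal Go sketch of that call against the v1 CRI API; the containerd socket path is an assumption, not taken from the log:

```go
package main

import (
	"context"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Assumed containerd CRI socket path; adjust for the actual host.
	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	// CIDR value taken from the kubelet log line above.
	_, err = rt.UpdateRuntimeConfig(ctx, &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
		},
	})
	if err != nil {
		panic(err)
	}
}
```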
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:19.104109 systemd[1]: Started cri-containerd-a76519e2c65df1523e1e6ccbff99402188a8d7408c4b07b7830bd6afb3ab8d3a.scope - libcontainer container a76519e2c65df1523e1e6ccbff99402188a8d7408c4b07b7830bd6afb3ab8d3a. Jul 6 23:59:19.132309 containerd[1975]: time="2025-07-06T23:59:19.132083504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mrzf6,Uid:d2ca0806-5e2f-4da7-a0cf-690744c30b1c,Namespace:kube-system,Attempt:0,} returns sandbox id \"a76519e2c65df1523e1e6ccbff99402188a8d7408c4b07b7830bd6afb3ab8d3a\"" Jul 6 23:59:19.142564 containerd[1975]: time="2025-07-06T23:59:19.142398913Z" level=info msg="CreateContainer within sandbox \"a76519e2c65df1523e1e6ccbff99402188a8d7408c4b07b7830bd6afb3ab8d3a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 6 23:59:19.175608 containerd[1975]: time="2025-07-06T23:59:19.175541612Z" level=info msg="CreateContainer within sandbox \"a76519e2c65df1523e1e6ccbff99402188a8d7408c4b07b7830bd6afb3ab8d3a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e46f053100451268dc8671be91d96f88b6aba60b068330d2521ea98c069d16d6\"" Jul 6 23:59:19.176329 containerd[1975]: time="2025-07-06T23:59:19.176160882Z" level=info msg="StartContainer for \"e46f053100451268dc8671be91d96f88b6aba60b068330d2521ea98c069d16d6\"" Jul 6 23:59:19.221126 systemd[1]: Started cri-containerd-e46f053100451268dc8671be91d96f88b6aba60b068330d2521ea98c069d16d6.scope - libcontainer container e46f053100451268dc8671be91d96f88b6aba60b068330d2521ea98c069d16d6. Jul 6 23:59:19.375517 containerd[1975]: time="2025-07-06T23:59:19.375341231Z" level=info msg="StartContainer for \"e46f053100451268dc8671be91d96f88b6aba60b068330d2521ea98c069d16d6\" returns successfully" Jul 6 23:59:19.440923 systemd[1]: Created slice kubepods-besteffort-pod957afe95_7151_49bd_838b_f19b3008db34.slice - libcontainer container kubepods-besteffort-pod957afe95_7151_49bd_838b_f19b3008db34.slice. Jul 6 23:59:19.508296 kubelet[3195]: I0706 23:59:19.508251 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4h29\" (UniqueName: \"kubernetes.io/projected/957afe95-7151-49bd-838b-f19b3008db34-kube-api-access-p4h29\") pod \"tigera-operator-747864d56d-5kdp2\" (UID: \"957afe95-7151-49bd-838b-f19b3008db34\") " pod="tigera-operator/tigera-operator-747864d56d-5kdp2" Jul 6 23:59:19.508296 kubelet[3195]: I0706 23:59:19.508304 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/957afe95-7151-49bd-838b-f19b3008db34-var-lib-calico\") pod \"tigera-operator-747864d56d-5kdp2\" (UID: \"957afe95-7151-49bd-838b-f19b3008db34\") " pod="tigera-operator/tigera-operator-747864d56d-5kdp2" Jul 6 23:59:19.747040 containerd[1975]: time="2025-07-06T23:59:19.745810657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-5kdp2,Uid:957afe95-7151-49bd-838b-f19b3008db34,Namespace:tigera-operator,Attempt:0,}" Jul 6 23:59:19.787114 containerd[1975]: time="2025-07-06T23:59:19.785839701Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:59:19.787114 containerd[1975]: time="2025-07-06T23:59:19.786468779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:59:19.787114 containerd[1975]: time="2025-07-06T23:59:19.786488973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:19.787114 containerd[1975]: time="2025-07-06T23:59:19.786603217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:19.814173 systemd[1]: Started cri-containerd-c6c66a28d33dee1df04912bfd5dee50366f40ee6a45a4b772a78321061bc563e.scope - libcontainer container c6c66a28d33dee1df04912bfd5dee50366f40ee6a45a4b772a78321061bc563e. Jul 6 23:59:19.865397 containerd[1975]: time="2025-07-06T23:59:19.865041063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-5kdp2,Uid:957afe95-7151-49bd-838b-f19b3008db34,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c6c66a28d33dee1df04912bfd5dee50366f40ee6a45a4b772a78321061bc563e\"" Jul 6 23:59:19.870044 containerd[1975]: time="2025-07-06T23:59:19.867844924Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 6 23:59:19.919843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2395307760.mount: Deactivated successfully. Jul 6 23:59:19.945454 kubelet[3195]: I0706 23:59:19.945402 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mrzf6" podStartSLOduration=1.945385839 podStartE2EDuration="1.945385839s" podCreationTimestamp="2025-07-06 23:59:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:59:19.945227703 +0000 UTC m=+7.217156049" watchObservedRunningTime="2025-07-06 23:59:19.945385839 +0000 UTC m=+7.217314185" Jul 6 23:59:21.487481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1422727747.mount: Deactivated successfully. 
Jul 6 23:59:22.326730 containerd[1975]: time="2025-07-06T23:59:22.326680318Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:22.327736 containerd[1975]: time="2025-07-06T23:59:22.327592988Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 6 23:59:22.328898 containerd[1975]: time="2025-07-06T23:59:22.328777384Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:22.331179 containerd[1975]: time="2025-07-06T23:59:22.331137200Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:22.331964 containerd[1975]: time="2025-07-06T23:59:22.331762774Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.462105146s" Jul 6 23:59:22.331964 containerd[1975]: time="2025-07-06T23:59:22.331794801Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 6 23:59:22.336180 containerd[1975]: time="2025-07-06T23:59:22.336139414Z" level=info msg="CreateContainer within sandbox \"c6c66a28d33dee1df04912bfd5dee50366f40ee6a45a4b772a78321061bc563e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 6 23:59:22.359495 containerd[1975]: time="2025-07-06T23:59:22.359441268Z" level=info msg="CreateContainer within sandbox \"c6c66a28d33dee1df04912bfd5dee50366f40ee6a45a4b772a78321061bc563e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bf813852e2590429a328c7ec5b61b2621c0ef9d60fcdca6a0cdbb096f67caef9\"" Jul 6 23:59:22.360321 containerd[1975]: time="2025-07-06T23:59:22.360300397Z" level=info msg="StartContainer for \"bf813852e2590429a328c7ec5b61b2621c0ef9d60fcdca6a0cdbb096f67caef9\"" Jul 6 23:59:22.391104 systemd[1]: Started cri-containerd-bf813852e2590429a328c7ec5b61b2621c0ef9d60fcdca6a0cdbb096f67caef9.scope - libcontainer container bf813852e2590429a328c7ec5b61b2621c0ef9d60fcdca6a0cdbb096f67caef9. 
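The pull statistics logged above imply the effective download throughput for the tigera/operator image (bytes read divided by the reported pull duration; a back-of-the-envelope check, since the byte counter covers registry reads rather than unpacked layer size):

\[
\frac{25{,}056{,}543\ \text{B}}{2.462105146\ \text{s}} \approx 1.02 \times 10^{7}\ \text{B/s} \approx 10.2\ \text{MB/s}.
\]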
Jul 6 23:59:22.421432 containerd[1975]: time="2025-07-06T23:59:22.421373948Z" level=info msg="StartContainer for \"bf813852e2590429a328c7ec5b61b2621c0ef9d60fcdca6a0cdbb096f67caef9\" returns successfully" Jul 6 23:59:22.957048 kubelet[3195]: I0706 23:59:22.956594 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-5kdp2" podStartSLOduration=1.491439234 podStartE2EDuration="3.956576935s" podCreationTimestamp="2025-07-06 23:59:19 +0000 UTC" firstStartedPulling="2025-07-06 23:59:19.867465388 +0000 UTC m=+7.139393728" lastFinishedPulling="2025-07-06 23:59:22.332603101 +0000 UTC m=+9.604531429" observedRunningTime="2025-07-06 23:59:22.956557136 +0000 UTC m=+10.228485483" watchObservedRunningTime="2025-07-06 23:59:22.956576935 +0000 UTC m=+10.228505281" Jul 6 23:59:29.300662 sudo[2325]: pam_unix(sudo:session): session closed for user root Jul 6 23:59:29.328231 sshd[2322]: pam_unix(sshd:session): session closed for user core Jul 6 23:59:29.332339 systemd-logind[1952]: Session 9 logged out. Waiting for processes to exit. Jul 6 23:59:29.334611 systemd[1]: sshd@8-172.31.21.95:22-147.75.109.163:43186.service: Deactivated successfully. Jul 6 23:59:29.338368 systemd[1]: session-9.scope: Deactivated successfully. Jul 6 23:59:29.338843 systemd[1]: session-9.scope: Consumed 5.994s CPU time, 143.4M memory peak, 0B memory swap peak. Jul 6 23:59:29.340692 systemd-logind[1952]: Removed session 9. Jul 6 23:59:34.912921 systemd[1]: Created slice kubepods-besteffort-podc31de20a_55fb_4088_8829_880d7c715345.slice - libcontainer container kubepods-besteffort-podc31de20a_55fb_4088_8829_880d7c715345.slice. Jul 6 23:59:34.915236 kubelet[3195]: I0706 23:59:34.914783 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c31de20a-55fb-4088-8829-880d7c715345-typha-certs\") pod \"calico-typha-96cb4f8c-798vm\" (UID: \"c31de20a-55fb-4088-8829-880d7c715345\") " pod="calico-system/calico-typha-96cb4f8c-798vm" Jul 6 23:59:34.915236 kubelet[3195]: I0706 23:59:34.914832 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c31de20a-55fb-4088-8829-880d7c715345-tigera-ca-bundle\") pod \"calico-typha-96cb4f8c-798vm\" (UID: \"c31de20a-55fb-4088-8829-880d7c715345\") " pod="calico-system/calico-typha-96cb4f8c-798vm" Jul 6 23:59:34.915236 kubelet[3195]: I0706 23:59:34.914994 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfgjj\" (UniqueName: \"kubernetes.io/projected/c31de20a-55fb-4088-8829-880d7c715345-kube-api-access-zfgjj\") pod \"calico-typha-96cb4f8c-798vm\" (UID: \"c31de20a-55fb-4088-8829-880d7c715345\") " pod="calico-system/calico-typha-96cb4f8c-798vm" Jul 6 23:59:35.230516 containerd[1975]: time="2025-07-06T23:59:35.230470054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-96cb4f8c-798vm,Uid:c31de20a-55fb-4088-8829-880d7c715345,Namespace:calico-system,Attempt:0,}" Jul 6 23:59:35.312049 containerd[1975]: time="2025-07-06T23:59:35.311167553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:59:35.315166 containerd[1975]: time="2025-07-06T23:59:35.315054428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
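The slice name in the systemd line above encodes the pod's QoS class and UID: kubepods-besteffort-podc31de20a_55fb_4088_8829_880d7c715345.slice is exactly the calico-typha pod UID c31de20a-55fb-4088-8829-880d7c715345 with its dashes replaced by underscores. A tiny Go sketch reproducing that naming; podSliceName is a hypothetical helper for illustration, not a kubelet function:

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName rebuilds the systemd slice name visible in the log from a
// pod's QoS class and UID, replacing the UID's dashes with underscores.
func podSliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// UID taken from the calico-typha pod above.
	fmt.Println(podSliceName("besteffort", "c31de20a-55fb-4088-8829-880d7c715345"))
	// Prints: kubepods-besteffort-podc31de20a_55fb_4088_8829_880d7c715345.slice
}
```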
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:59:35.315166 containerd[1975]: time="2025-07-06T23:59:35.315089430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:35.315508 containerd[1975]: time="2025-07-06T23:59:35.315449013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:35.336732 systemd[1]: Created slice kubepods-besteffort-podf53fbee3_e18a_4e33_9aa9_3de3b60b2e3b.slice - libcontainer container kubepods-besteffort-podf53fbee3_e18a_4e33_9aa9_3de3b60b2e3b.slice. Jul 6 23:59:35.422523 kubelet[3195]: I0706 23:59:35.421083 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f53fbee3-e18a-4e33-9aa9-3de3b60b2e3b-lib-modules\") pod \"calico-node-lhqmw\" (UID: \"f53fbee3-e18a-4e33-9aa9-3de3b60b2e3b\") " pod="calico-system/calico-node-lhqmw" Jul 6 23:59:35.422523 kubelet[3195]: I0706 23:59:35.421135 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f53fbee3-e18a-4e33-9aa9-3de3b60b2e3b-node-certs\") pod \"calico-node-lhqmw\" (UID: \"f53fbee3-e18a-4e33-9aa9-3de3b60b2e3b\") " pod="calico-system/calico-node-lhqmw" Jul 6 23:59:35.422523 kubelet[3195]: I0706 23:59:35.421162 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f53fbee3-e18a-4e33-9aa9-3de3b60b2e3b-var-lib-calico\") pod \"calico-node-lhqmw\" (UID: \"f53fbee3-e18a-4e33-9aa9-3de3b60b2e3b\") " pod="calico-system/calico-node-lhqmw" Jul 6 23:59:35.422523 kubelet[3195]: I0706 23:59:35.421183 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f53fbee3-e18a-4e33-9aa9-3de3b60b2e3b-xtables-lock\") pod \"calico-node-lhqmw\" (UID: \"f53fbee3-e18a-4e33-9aa9-3de3b60b2e3b\") " pod="calico-system/calico-node-lhqmw" Jul 6 23:59:35.422523 kubelet[3195]: I0706 23:59:35.421207 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f53fbee3-e18a-4e33-9aa9-3de3b60b2e3b-var-run-calico\") pod \"calico-node-lhqmw\" (UID: \"f53fbee3-e18a-4e33-9aa9-3de3b60b2e3b\") " pod="calico-system/calico-node-lhqmw" Jul 6 23:59:35.422845 kubelet[3195]: I0706 23:59:35.421232 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f53fbee3-e18a-4e33-9aa9-3de3b60b2e3b-flexvol-driver-host\") pod \"calico-node-lhqmw\" (UID: \"f53fbee3-e18a-4e33-9aa9-3de3b60b2e3b\") " pod="calico-system/calico-node-lhqmw" Jul 6 23:59:35.422845 kubelet[3195]: I0706 23:59:35.421258 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f53fbee3-e18a-4e33-9aa9-3de3b60b2e3b-cni-bin-dir\") pod \"calico-node-lhqmw\" (UID: \"f53fbee3-e18a-4e33-9aa9-3de3b60b2e3b\") " pod="calico-system/calico-node-lhqmw" Jul 6 23:59:35.422845 kubelet[3195]: I0706 23:59:35.421283 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: 
\"kubernetes.io/host-path/f53fbee3-e18a-4e33-9aa9-3de3b60b2e3b-cni-log-dir\") pod \"calico-node-lhqmw\" (UID: \"f53fbee3-e18a-4e33-9aa9-3de3b60b2e3b\") " pod="calico-system/calico-node-lhqmw" Jul 6 23:59:35.422845 kubelet[3195]: I0706 23:59:35.421311 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f53fbee3-e18a-4e33-9aa9-3de3b60b2e3b-cni-net-dir\") pod \"calico-node-lhqmw\" (UID: \"f53fbee3-e18a-4e33-9aa9-3de3b60b2e3b\") " pod="calico-system/calico-node-lhqmw" Jul 6 23:59:35.422845 kubelet[3195]: I0706 23:59:35.421338 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52q2x\" (UniqueName: \"kubernetes.io/projected/f53fbee3-e18a-4e33-9aa9-3de3b60b2e3b-kube-api-access-52q2x\") pod \"calico-node-lhqmw\" (UID: \"f53fbee3-e18a-4e33-9aa9-3de3b60b2e3b\") " pod="calico-system/calico-node-lhqmw" Jul 6 23:59:35.423058 kubelet[3195]: I0706 23:59:35.421365 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f53fbee3-e18a-4e33-9aa9-3de3b60b2e3b-policysync\") pod \"calico-node-lhqmw\" (UID: \"f53fbee3-e18a-4e33-9aa9-3de3b60b2e3b\") " pod="calico-system/calico-node-lhqmw" Jul 6 23:59:35.423058 kubelet[3195]: I0706 23:59:35.421387 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f53fbee3-e18a-4e33-9aa9-3de3b60b2e3b-tigera-ca-bundle\") pod \"calico-node-lhqmw\" (UID: \"f53fbee3-e18a-4e33-9aa9-3de3b60b2e3b\") " pod="calico-system/calico-node-lhqmw" Jul 6 23:59:35.436106 systemd[1]: Started cri-containerd-bc52b0d9b239425d579cd6d5ad95226d6cb80600911de43820bc781e2d812050.scope - libcontainer container bc52b0d9b239425d579cd6d5ad95226d6cb80600911de43820bc781e2d812050. Jul 6 23:59:35.528632 kubelet[3195]: E0706 23:59:35.528485 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.528632 kubelet[3195]: W0706 23:59:35.528511 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.528919 kubelet[3195]: E0706 23:59:35.528537 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.538336 kubelet[3195]: E0706 23:59:35.538285 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.538336 kubelet[3195]: W0706 23:59:35.538331 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.538525 kubelet[3195]: E0706 23:59:35.538355 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:59:35.546759 kubelet[3195]: E0706 23:59:35.546649 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.546759 kubelet[3195]: W0706 23:59:35.546675 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.546759 kubelet[3195]: E0706 23:59:35.546698 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.643139 containerd[1975]: time="2025-07-06T23:59:35.642893779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lhqmw,Uid:f53fbee3-e18a-4e33-9aa9-3de3b60b2e3b,Namespace:calico-system,Attempt:0,}" Jul 6 23:59:35.652145 containerd[1975]: time="2025-07-06T23:59:35.652098870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-96cb4f8c-798vm,Uid:c31de20a-55fb-4088-8829-880d7c715345,Namespace:calico-system,Attempt:0,} returns sandbox id \"bc52b0d9b239425d579cd6d5ad95226d6cb80600911de43820bc781e2d812050\"" Jul 6 23:59:35.655270 containerd[1975]: time="2025-07-06T23:59:35.655233069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 6 23:59:35.664578 kubelet[3195]: E0706 23:59:35.664307 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lrnkv" podUID="6b84e45b-9676-47c1-bdf6-d1f78bd2c24a" Jul 6 23:59:35.715266 kubelet[3195]: E0706 23:59:35.713975 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.715266 kubelet[3195]: W0706 23:59:35.714001 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.715266 kubelet[3195]: E0706 23:59:35.714025 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.715654 kubelet[3195]: E0706 23:59:35.715377 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.715654 kubelet[3195]: W0706 23:59:35.715393 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.715654 kubelet[3195]: E0706 23:59:35.715411 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:59:35.717223 kubelet[3195]: E0706 23:59:35.716323 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.717223 kubelet[3195]: W0706 23:59:35.716340 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.717223 kubelet[3195]: E0706 23:59:35.716356 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.719719 kubelet[3195]: E0706 23:59:35.718140 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.719719 kubelet[3195]: W0706 23:59:35.718157 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.719719 kubelet[3195]: E0706 23:59:35.718176 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.722626 kubelet[3195]: E0706 23:59:35.722127 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.722626 kubelet[3195]: W0706 23:59:35.722148 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.722626 kubelet[3195]: E0706 23:59:35.722168 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.722626 kubelet[3195]: E0706 23:59:35.722418 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.722626 kubelet[3195]: W0706 23:59:35.722428 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.722626 kubelet[3195]: E0706 23:59:35.722440 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.722626 kubelet[3195]: E0706 23:59:35.722635 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.723238 kubelet[3195]: W0706 23:59:35.722645 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.723238 kubelet[3195]: E0706 23:59:35.722656 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:59:35.723238 kubelet[3195]: E0706 23:59:35.722908 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.723238 kubelet[3195]: W0706 23:59:35.722918 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.723238 kubelet[3195]: E0706 23:59:35.722931 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.723238 kubelet[3195]: E0706 23:59:35.723167 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.723238 kubelet[3195]: W0706 23:59:35.723176 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.723238 kubelet[3195]: E0706 23:59:35.723189 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.724155 kubelet[3195]: E0706 23:59:35.724131 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.724155 kubelet[3195]: W0706 23:59:35.724152 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.724289 kubelet[3195]: E0706 23:59:35.724168 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.725367 kubelet[3195]: E0706 23:59:35.725345 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.725367 kubelet[3195]: W0706 23:59:35.725364 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.725509 kubelet[3195]: E0706 23:59:35.725379 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.729542 kubelet[3195]: E0706 23:59:35.727074 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.729542 kubelet[3195]: W0706 23:59:35.727091 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.729542 kubelet[3195]: E0706 23:59:35.727107 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:59:35.729542 kubelet[3195]: E0706 23:59:35.727768 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.729542 kubelet[3195]: W0706 23:59:35.727779 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.729542 kubelet[3195]: E0706 23:59:35.727792 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.729542 kubelet[3195]: E0706 23:59:35.729368 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.729542 kubelet[3195]: W0706 23:59:35.729382 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.729542 kubelet[3195]: E0706 23:59:35.729398 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.730741 kubelet[3195]: E0706 23:59:35.730724 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.731016 kubelet[3195]: W0706 23:59:35.730999 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.731170 kubelet[3195]: E0706 23:59:35.731114 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.732343 kubelet[3195]: E0706 23:59:35.732327 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.732637 kubelet[3195]: W0706 23:59:35.732473 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.732637 kubelet[3195]: E0706 23:59:35.732495 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.734138 kubelet[3195]: E0706 23:59:35.733995 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.734138 kubelet[3195]: W0706 23:59:35.734017 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.734138 kubelet[3195]: E0706 23:59:35.734032 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:59:35.735021 kubelet[3195]: E0706 23:59:35.734734 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.735021 kubelet[3195]: W0706 23:59:35.734748 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.735021 kubelet[3195]: E0706 23:59:35.734764 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.736042 kubelet[3195]: E0706 23:59:35.735583 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.736042 kubelet[3195]: W0706 23:59:35.735598 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.736042 kubelet[3195]: E0706 23:59:35.735613 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.737115 kubelet[3195]: E0706 23:59:35.736618 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.737115 kubelet[3195]: W0706 23:59:35.736634 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.737115 kubelet[3195]: E0706 23:59:35.736652 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.737836 kubelet[3195]: E0706 23:59:35.737808 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.738296 kubelet[3195]: W0706 23:59:35.738032 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.738296 kubelet[3195]: E0706 23:59:35.738056 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:59:35.738296 kubelet[3195]: I0706 23:59:35.738190 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6b84e45b-9676-47c1-bdf6-d1f78bd2c24a-registration-dir\") pod \"csi-node-driver-lrnkv\" (UID: \"6b84e45b-9676-47c1-bdf6-d1f78bd2c24a\") " pod="calico-system/csi-node-driver-lrnkv" Jul 6 23:59:35.739588 kubelet[3195]: E0706 23:59:35.739279 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.739588 kubelet[3195]: W0706 23:59:35.739295 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.739588 kubelet[3195]: E0706 23:59:35.739313 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.740338 kubelet[3195]: E0706 23:59:35.740140 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.740338 kubelet[3195]: W0706 23:59:35.740155 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.740338 kubelet[3195]: E0706 23:59:35.740170 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.741265 kubelet[3195]: E0706 23:59:35.741070 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.741265 kubelet[3195]: W0706 23:59:35.741085 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.741265 kubelet[3195]: E0706 23:59:35.741101 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.741265 kubelet[3195]: I0706 23:59:35.741224 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6b84e45b-9676-47c1-bdf6-d1f78bd2c24a-socket-dir\") pod \"csi-node-driver-lrnkv\" (UID: \"6b84e45b-9676-47c1-bdf6-d1f78bd2c24a\") " pod="calico-system/csi-node-driver-lrnkv" Jul 6 23:59:35.742588 kubelet[3195]: E0706 23:59:35.742137 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.742588 kubelet[3195]: W0706 23:59:35.742154 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.742588 kubelet[3195]: E0706 23:59:35.742170 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:59:35.742588 kubelet[3195]: I0706 23:59:35.742204 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6b84e45b-9676-47c1-bdf6-d1f78bd2c24a-varrun\") pod \"csi-node-driver-lrnkv\" (UID: \"6b84e45b-9676-47c1-bdf6-d1f78bd2c24a\") " pod="calico-system/csi-node-driver-lrnkv" Jul 6 23:59:35.743898 kubelet[3195]: E0706 23:59:35.743190 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.743898 kubelet[3195]: W0706 23:59:35.743207 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.743898 kubelet[3195]: E0706 23:59:35.743222 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.743898 kubelet[3195]: I0706 23:59:35.743410 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6b84e45b-9676-47c1-bdf6-d1f78bd2c24a-kubelet-dir\") pod \"csi-node-driver-lrnkv\" (UID: \"6b84e45b-9676-47c1-bdf6-d1f78bd2c24a\") " pod="calico-system/csi-node-driver-lrnkv" Jul 6 23:59:35.744201 kubelet[3195]: E0706 23:59:35.744156 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.744201 kubelet[3195]: W0706 23:59:35.744172 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.744201 kubelet[3195]: E0706 23:59:35.744186 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.745108 kubelet[3195]: I0706 23:59:35.744692 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmnbw\" (UniqueName: \"kubernetes.io/projected/6b84e45b-9676-47c1-bdf6-d1f78bd2c24a-kube-api-access-hmnbw\") pod \"csi-node-driver-lrnkv\" (UID: \"6b84e45b-9676-47c1-bdf6-d1f78bd2c24a\") " pod="calico-system/csi-node-driver-lrnkv" Jul 6 23:59:35.745361 kubelet[3195]: E0706 23:59:35.745228 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.745361 kubelet[3195]: W0706 23:59:35.745242 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.745361 kubelet[3195]: E0706 23:59:35.745256 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:59:35.746478 kubelet[3195]: E0706 23:59:35.746147 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.746478 kubelet[3195]: W0706 23:59:35.746162 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.746478 kubelet[3195]: E0706 23:59:35.746177 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.747447 kubelet[3195]: E0706 23:59:35.746954 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.747447 kubelet[3195]: W0706 23:59:35.746968 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.747447 kubelet[3195]: E0706 23:59:35.746983 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.747833 kubelet[3195]: E0706 23:59:35.747740 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.747833 kubelet[3195]: W0706 23:59:35.747752 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.747833 kubelet[3195]: E0706 23:59:35.747766 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.749252 kubelet[3195]: E0706 23:59:35.748625 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.749252 kubelet[3195]: W0706 23:59:35.748643 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.749252 kubelet[3195]: E0706 23:59:35.748657 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.750191 kubelet[3195]: E0706 23:59:35.749859 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.750191 kubelet[3195]: W0706 23:59:35.749885 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.750191 kubelet[3195]: E0706 23:59:35.749909 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:59:35.750930 kubelet[3195]: E0706 23:59:35.750658 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.750930 kubelet[3195]: W0706 23:59:35.750673 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.750930 kubelet[3195]: E0706 23:59:35.750686 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.751711 kubelet[3195]: E0706 23:59:35.751501 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.751711 kubelet[3195]: W0706 23:59:35.751515 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.751711 kubelet[3195]: E0706 23:59:35.751528 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.784543 containerd[1975]: time="2025-07-06T23:59:35.776126497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:59:35.784543 containerd[1975]: time="2025-07-06T23:59:35.776192516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:59:35.784543 containerd[1975]: time="2025-07-06T23:59:35.776209199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:35.784543 containerd[1975]: time="2025-07-06T23:59:35.776321879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:35.817336 systemd[1]: Started cri-containerd-ea6cb44a8deed26b3d7386c1b8aa0cf04fa31918b660ee00f614d118b5cb50b8.scope - libcontainer container ea6cb44a8deed26b3d7386c1b8aa0cf04fa31918b660ee00f614d118b5cb50b8. Jul 6 23:59:35.846970 kubelet[3195]: E0706 23:59:35.846697 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.847283 kubelet[3195]: W0706 23:59:35.847141 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.847283 kubelet[3195]: E0706 23:59:35.847172 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:59:35.848370 kubelet[3195]: E0706 23:59:35.848108 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.848370 kubelet[3195]: W0706 23:59:35.848127 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.848370 kubelet[3195]: E0706 23:59:35.848159 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.849200 kubelet[3195]: E0706 23:59:35.848922 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.849200 kubelet[3195]: W0706 23:59:35.848938 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.849200 kubelet[3195]: E0706 23:59:35.848972 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.849857 kubelet[3195]: E0706 23:59:35.849758 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.849857 kubelet[3195]: W0706 23:59:35.849770 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.849857 kubelet[3195]: E0706 23:59:35.849783 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.850766 kubelet[3195]: E0706 23:59:35.850629 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.850766 kubelet[3195]: W0706 23:59:35.850647 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.850766 kubelet[3195]: E0706 23:59:35.850661 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.851942 kubelet[3195]: E0706 23:59:35.851300 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.851942 kubelet[3195]: W0706 23:59:35.851314 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.851942 kubelet[3195]: E0706 23:59:35.851328 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:59:35.852326 kubelet[3195]: E0706 23:59:35.852178 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.852326 kubelet[3195]: W0706 23:59:35.852192 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.852326 kubelet[3195]: E0706 23:59:35.852205 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.853343 kubelet[3195]: E0706 23:59:35.853083 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.853343 kubelet[3195]: W0706 23:59:35.853098 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.853343 kubelet[3195]: E0706 23:59:35.853112 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.853898 kubelet[3195]: E0706 23:59:35.853674 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.853898 kubelet[3195]: W0706 23:59:35.853688 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.853898 kubelet[3195]: E0706 23:59:35.853702 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.854564 kubelet[3195]: E0706 23:59:35.854385 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.854564 kubelet[3195]: W0706 23:59:35.854399 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.854564 kubelet[3195]: E0706 23:59:35.854417 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.855556 kubelet[3195]: E0706 23:59:35.855317 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.855556 kubelet[3195]: W0706 23:59:35.855332 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.855556 kubelet[3195]: E0706 23:59:35.855346 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:59:35.855899 kubelet[3195]: E0706 23:59:35.855774 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.855899 kubelet[3195]: W0706 23:59:35.855790 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.855899 kubelet[3195]: E0706 23:59:35.855803 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.856926 kubelet[3195]: E0706 23:59:35.856728 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.856926 kubelet[3195]: W0706 23:59:35.856743 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.856926 kubelet[3195]: E0706 23:59:35.856757 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.857525 kubelet[3195]: E0706 23:59:35.857286 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.857525 kubelet[3195]: W0706 23:59:35.857301 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.857525 kubelet[3195]: E0706 23:59:35.857315 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.858053 kubelet[3195]: E0706 23:59:35.857845 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.858053 kubelet[3195]: W0706 23:59:35.857859 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.858053 kubelet[3195]: E0706 23:59:35.857977 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.859078 kubelet[3195]: E0706 23:59:35.858625 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.859078 kubelet[3195]: W0706 23:59:35.858641 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.859078 kubelet[3195]: E0706 23:59:35.858655 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:59:35.859552 kubelet[3195]: E0706 23:59:35.859416 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.859552 kubelet[3195]: W0706 23:59:35.859432 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.859552 kubelet[3195]: E0706 23:59:35.859446 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.861607 kubelet[3195]: E0706 23:59:35.861480 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.861607 kubelet[3195]: W0706 23:59:35.861496 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.861607 kubelet[3195]: E0706 23:59:35.861513 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.862311 kubelet[3195]: E0706 23:59:35.862117 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.862311 kubelet[3195]: W0706 23:59:35.862133 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.862311 kubelet[3195]: E0706 23:59:35.862148 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.864070 kubelet[3195]: E0706 23:59:35.863922 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.864070 kubelet[3195]: W0706 23:59:35.863938 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.864070 kubelet[3195]: E0706 23:59:35.863953 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.866216 kubelet[3195]: E0706 23:59:35.864921 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.866216 kubelet[3195]: W0706 23:59:35.864937 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.866216 kubelet[3195]: E0706 23:59:35.864952 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:59:35.867798 kubelet[3195]: E0706 23:59:35.867671 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.868265 kubelet[3195]: W0706 23:59:35.868246 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.869644 kubelet[3195]: E0706 23:59:35.869425 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.871554 kubelet[3195]: E0706 23:59:35.871079 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.871554 kubelet[3195]: W0706 23:59:35.871095 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.871554 kubelet[3195]: E0706 23:59:35.871112 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.872578 kubelet[3195]: E0706 23:59:35.872371 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.872578 kubelet[3195]: W0706 23:59:35.872388 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.872578 kubelet[3195]: E0706 23:59:35.872405 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.873897 kubelet[3195]: E0706 23:59:35.873671 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.873897 kubelet[3195]: W0706 23:59:35.873694 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.873897 kubelet[3195]: E0706 23:59:35.873714 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:35.894902 kubelet[3195]: E0706 23:59:35.894500 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:35.894902 kubelet[3195]: W0706 23:59:35.894526 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:35.894902 kubelet[3195]: E0706 23:59:35.894551 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:59:35.934014 containerd[1975]: time="2025-07-06T23:59:35.933907375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lhqmw,Uid:f53fbee3-e18a-4e33-9aa9-3de3b60b2e3b,Namespace:calico-system,Attempt:0,} returns sandbox id \"ea6cb44a8deed26b3d7386c1b8aa0cf04fa31918b660ee00f614d118b5cb50b8\"" Jul 6 23:59:37.125310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1290566570.mount: Deactivated successfully. Jul 6 23:59:37.900023 kubelet[3195]: E0706 23:59:37.899503 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lrnkv" podUID="6b84e45b-9676-47c1-bdf6-d1f78bd2c24a" Jul 6 23:59:38.643434 containerd[1975]: time="2025-07-06T23:59:38.643385461Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:38.645376 containerd[1975]: time="2025-07-06T23:59:38.645291176Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 6 23:59:38.651000 containerd[1975]: time="2025-07-06T23:59:38.650915060Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:38.656747 containerd[1975]: time="2025-07-06T23:59:38.656669545Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:38.658164 containerd[1975]: time="2025-07-06T23:59:38.657636833Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 3.002353216s" Jul 6 23:59:38.658164 containerd[1975]: time="2025-07-06T23:59:38.657681894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 6 23:59:38.660192 containerd[1975]: time="2025-07-06T23:59:38.660159904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 6 23:59:38.707210 containerd[1975]: time="2025-07-06T23:59:38.707095344Z" level=info msg="CreateContainer within sandbox \"bc52b0d9b239425d579cd6d5ad95226d6cb80600911de43820bc781e2d812050\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 6 23:59:38.752983 containerd[1975]: time="2025-07-06T23:59:38.752935913Z" level=info msg="CreateContainer within sandbox \"bc52b0d9b239425d579cd6d5ad95226d6cb80600911de43820bc781e2d812050\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0de614fc36b6278a606d925238ef4c7e4e10bdc67021b4bd91fd7fbc541e2257\"" Jul 6 23:59:38.755108 containerd[1975]: time="2025-07-06T23:59:38.754019019Z" level=info msg="StartContainer for \"0de614fc36b6278a606d925238ef4c7e4e10bdc67021b4bd91fd7fbc541e2257\"" Jul 6 23:59:38.818092 systemd[1]: Started 
cri-containerd-0de614fc36b6278a606d925238ef4c7e4e10bdc67021b4bd91fd7fbc541e2257.scope - libcontainer container 0de614fc36b6278a606d925238ef4c7e4e10bdc67021b4bd91fd7fbc541e2257. Jul 6 23:59:38.883896 containerd[1975]: time="2025-07-06T23:59:38.883700621Z" level=info msg="StartContainer for \"0de614fc36b6278a606d925238ef4c7e4e10bdc67021b4bd91fd7fbc541e2257\" returns successfully" Jul 6 23:59:39.060209 kubelet[3195]: E0706 23:59:39.059999 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:39.060209 kubelet[3195]: W0706 23:59:39.060032 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:39.060209 kubelet[3195]: E0706 23:59:39.060058 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:39.060821 kubelet[3195]: E0706 23:59:39.060323 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:39.060821 kubelet[3195]: W0706 23:59:39.060337 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:39.060821 kubelet[3195]: E0706 23:59:39.060354 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:39.060821 kubelet[3195]: E0706 23:59:39.060580 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:39.060821 kubelet[3195]: W0706 23:59:39.060591 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:39.060821 kubelet[3195]: E0706 23:59:39.060602 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:39.067444 kubelet[3195]: E0706 23:59:39.067410 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:39.067444 kubelet[3195]: W0706 23:59:39.067442 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:39.072497 kubelet[3195]: E0706 23:59:39.067469 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
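The pull records above carry two distinct sha256 values because OCI images are content-addressed at two levels: the image id (sha256:b3baa600…) is the digest of the image configuration blob, while the repo digest (…typha@sha256:da29d745…) is the digest of the manifest the registry served. Assuming the standard OCI scheme, such a digest is simply SHA-256 over the raw bytes; an illustrative computation, where the manifest literal is a stand-in rather than the real typha manifest:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	// Stand-in bytes; the real digest covers the exact manifest JSON
	// served by ghcr.io for calico/typha:v3.30.2.
	manifest := []byte(`{"schemaVersion":2,"mediaType":"application/vnd.oci.image.manifest.v1+json"}`)
	fmt.Printf("sha256:%x\n", sha256.Sum256(manifest))
}
```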
Error: unexpected end of JSON input" Jul 6 23:59:39.072497 kubelet[3195]: E0706 23:59:39.071060 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:39.072497 kubelet[3195]: W0706 23:59:39.071081 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:39.072497 kubelet[3195]: E0706 23:59:39.071105 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:39.073348 kubelet[3195]: E0706 23:59:39.072935 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:39.073348 kubelet[3195]: W0706 23:59:39.072954 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:39.073348 kubelet[3195]: E0706 23:59:39.072973 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:39.075939 kubelet[3195]: E0706 23:59:39.075714 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:39.075939 kubelet[3195]: W0706 23:59:39.075738 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:39.075939 kubelet[3195]: E0706 23:59:39.075760 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:39.077349 kubelet[3195]: E0706 23:59:39.076397 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:39.077349 kubelet[3195]: W0706 23:59:39.076417 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:39.077349 kubelet[3195]: E0706 23:59:39.076434 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:39.080097 kubelet[3195]: E0706 23:59:39.080076 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:39.080415 kubelet[3195]: W0706 23:59:39.080184 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:39.080415 kubelet[3195]: E0706 23:59:39.080209 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:59:39.080716 kubelet[3195]: E0706 23:59:39.080703 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:39.080852 kubelet[3195]: W0706 23:59:39.080787 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:39.080852 kubelet[3195]: E0706 23:59:39.080804 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:39.081360 kubelet[3195]: E0706 23:59:39.081215 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:39.081360 kubelet[3195]: W0706 23:59:39.081229 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:39.081360 kubelet[3195]: E0706 23:59:39.081292 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:39.083206 kubelet[3195]: E0706 23:59:39.082649 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:39.083206 kubelet[3195]: W0706 23:59:39.082665 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:39.083206 kubelet[3195]: E0706 23:59:39.082678 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:39.084680 kubelet[3195]: E0706 23:59:39.083448 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:39.084680 kubelet[3195]: W0706 23:59:39.083459 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:39.084680 kubelet[3195]: E0706 23:59:39.083473 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:39.084680 kubelet[3195]: E0706 23:59:39.084033 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:39.084680 kubelet[3195]: W0706 23:59:39.084045 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:39.084680 kubelet[3195]: E0706 23:59:39.084058 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:59:39.084680 kubelet[3195]: E0706 23:59:39.084479 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:39.084680 kubelet[3195]: W0706 23:59:39.084490 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:39.084680 kubelet[3195]: E0706 23:59:39.084503 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:39.089291 kubelet[3195]: E0706 23:59:39.089029 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:39.089291 kubelet[3195]: W0706 23:59:39.089057 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:39.089291 kubelet[3195]: E0706 23:59:39.089078 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:39.090005 kubelet[3195]: E0706 23:59:39.089820 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:39.090005 kubelet[3195]: W0706 23:59:39.089856 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:39.090005 kubelet[3195]: E0706 23:59:39.089923 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:39.090705 kubelet[3195]: E0706 23:59:39.090478 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:39.090705 kubelet[3195]: W0706 23:59:39.090493 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:39.090705 kubelet[3195]: E0706 23:59:39.090507 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:39.091341 kubelet[3195]: E0706 23:59:39.091099 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:39.091341 kubelet[3195]: W0706 23:59:39.091114 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:39.091341 kubelet[3195]: E0706 23:59:39.091128 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:59:39.098802 kubelet[3195]: E0706 23:59:39.091465 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:39.098802 kubelet[3195]: W0706 23:59:39.091477 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:39.098802 kubelet[3195]: E0706 23:59:39.091490 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:39.098802 kubelet[3195]: E0706 23:59:39.091746 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:39.098802 kubelet[3195]: W0706 23:59:39.091757 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:39.098802 kubelet[3195]: E0706 23:59:39.091769 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:39.098802 kubelet[3195]: E0706 23:59:39.092669 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:39.098802 kubelet[3195]: W0706 23:59:39.092682 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:39.098802 kubelet[3195]: E0706 23:59:39.092696 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:39.098802 kubelet[3195]: E0706 23:59:39.093041 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:39.099322 kubelet[3195]: W0706 23:59:39.093065 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:39.099322 kubelet[3195]: E0706 23:59:39.093089 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:39.099322 kubelet[3195]: E0706 23:59:39.093523 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:39.099322 kubelet[3195]: W0706 23:59:39.093535 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:39.099322 kubelet[3195]: E0706 23:59:39.093549 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:59:39.099322 kubelet[3195]: E0706 23:59:39.093789 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:39.099322 kubelet[3195]: W0706 23:59:39.093801 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:39.099322 kubelet[3195]: E0706 23:59:39.093816 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:39.099322 kubelet[3195]: E0706 23:59:39.094139 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:39.099322 kubelet[3195]: W0706 23:59:39.094150 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:39.099745 kubelet[3195]: E0706 23:59:39.094163 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:39.099745 kubelet[3195]: E0706 23:59:39.094472 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:39.099745 kubelet[3195]: W0706 23:59:39.094483 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:39.099745 kubelet[3195]: E0706 23:59:39.094516 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:39.099745 kubelet[3195]: E0706 23:59:39.095347 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:39.099745 kubelet[3195]: W0706 23:59:39.095367 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:39.099745 kubelet[3195]: E0706 23:59:39.095382 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:39.099745 kubelet[3195]: E0706 23:59:39.095909 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:39.099745 kubelet[3195]: W0706 23:59:39.095920 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:39.099745 kubelet[3195]: E0706 23:59:39.095933 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:59:39.100174 kubelet[3195]: E0706 23:59:39.096304 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:39.100174 kubelet[3195]: W0706 23:59:39.096316 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:39.100174 kubelet[3195]: E0706 23:59:39.096328 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:39.100174 kubelet[3195]: E0706 23:59:39.096659 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:39.100174 kubelet[3195]: W0706 23:59:39.096670 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:39.100174 kubelet[3195]: E0706 23:59:39.096683 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:39.100174 kubelet[3195]: E0706 23:59:39.097440 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:39.100174 kubelet[3195]: W0706 23:59:39.097454 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:39.100174 kubelet[3195]: E0706 23:59:39.097468 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:39.100174 kubelet[3195]: E0706 23:59:39.097938 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:39.100546 kubelet[3195]: W0706 23:59:39.097949 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:39.100546 kubelet[3195]: E0706 23:59:39.097962 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:59:39.882193 kubelet[3195]: E0706 23:59:39.882133 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lrnkv" podUID="6b84e45b-9676-47c1-bdf6-d1f78bd2c24a" Jul 6 23:59:40.002969 kubelet[3195]: I0706 23:59:40.002938 3195 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:59:40.093028 kubelet[3195]: E0706 23:59:40.092764 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:40.095197 kubelet[3195]: W0706 23:59:40.092787 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:40.095197 kubelet[3195]: E0706 23:59:40.093413 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:40.095197 kubelet[3195]: E0706 23:59:40.094201 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:40.095197 kubelet[3195]: W0706 23:59:40.094212 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:40.095197 kubelet[3195]: E0706 23:59:40.094227 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:40.095197 kubelet[3195]: E0706 23:59:40.094402 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:40.095197 kubelet[3195]: W0706 23:59:40.094408 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:40.095197 kubelet[3195]: E0706 23:59:40.094416 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:59:40.095197 kubelet[3195]: E0706 23:59:40.094574 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:59:40.095197 kubelet[3195]: W0706 23:59:40.094580 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:59:40.096271 kubelet[3195]: E0706 23:59:40.094587 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
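The two kubelet records just above are the flip side of the calico-node startup running in parallel: pods such as csi-node-driver-lrnkv cannot be synced while the runtime reports NetworkReady=false, and it keeps reporting that until a CNI network configuration appears (conventionally under /etc/cni/net.d, which the calico-node container writes once it is up). For illustration only, a conflist of the general shape the runtime waits for, with hypothetical field values rather than Calico's actual config, emitted from Go so the sketch stays runnable:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Hypothetical minimal CNI conflist; real Calico config carries
	// ipam, policy, and kubeconfig sections beyond this sketch.
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "k8s-pod-network",
		"plugins": []map[string]any{
			{"type": "calico", "ipam": map[string]any{"type": "calico-ipam"}},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(out))
}
```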
[the same three FlexVolume records recur 33 times between 23:59:40.092 and 23:59:40.106; the final record is cut off at the end of this excerpt]
Error: unexpected end of JSON input" Jul 6 23:59:40.249214 containerd[1975]: time="2025-07-06T23:59:40.249145737Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:40.251385 containerd[1975]: time="2025-07-06T23:59:40.251213949Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 6 23:59:40.254717 containerd[1975]: time="2025-07-06T23:59:40.253522161Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:40.257369 containerd[1975]: time="2025-07-06T23:59:40.257305974Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:40.258237 containerd[1975]: time="2025-07-06T23:59:40.258198732Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.598005209s" Jul 6 23:59:40.258237 containerd[1975]: time="2025-07-06T23:59:40.258235198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 6 23:59:40.264759 containerd[1975]: time="2025-07-06T23:59:40.264710276Z" level=info msg="CreateContainer within sandbox \"ea6cb44a8deed26b3d7386c1b8aa0cf04fa31918b660ee00f614d118b5cb50b8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 6 23:59:40.294087 containerd[1975]: time="2025-07-06T23:59:40.294027392Z" level=info msg="CreateContainer within sandbox \"ea6cb44a8deed26b3d7386c1b8aa0cf04fa31918b660ee00f614d118b5cb50b8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"298cdd485715b5e97c7f82b03d9e50c4d606fb3d1c144c5a2638c1ebecaa3526\"" Jul 6 23:59:40.294879 containerd[1975]: time="2025-07-06T23:59:40.294831370Z" level=info msg="StartContainer for \"298cdd485715b5e97c7f82b03d9e50c4d606fb3d1c144c5a2638c1ebecaa3526\"" Jul 6 23:59:40.339567 systemd[1]: run-containerd-runc-k8s.io-298cdd485715b5e97c7f82b03d9e50c4d606fb3d1c144c5a2638c1ebecaa3526-runc.Jq4nA7.mount: Deactivated successfully. Jul 6 23:59:40.352114 systemd[1]: Started cri-containerd-298cdd485715b5e97c7f82b03d9e50c4d606fb3d1c144c5a2638c1ebecaa3526.scope - libcontainer container 298cdd485715b5e97c7f82b03d9e50c4d606fb3d1c144c5a2638c1ebecaa3526. Jul 6 23:59:40.391272 containerd[1975]: time="2025-07-06T23:59:40.390640148Z" level=info msg="StartContainer for \"298cdd485715b5e97c7f82b03d9e50c4d606fb3d1c144c5a2638c1ebecaa3526\" returns successfully" Jul 6 23:59:40.405640 systemd[1]: cri-containerd-298cdd485715b5e97c7f82b03d9e50c4d606fb3d1c144c5a2638c1ebecaa3526.scope: Deactivated successfully. 
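
The FlexVolume error burst above and the flexvol-driver container created here are the same event seen from two sides: kubelet periodically probes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ for drivers, execs each one with the argument init, and unmarshals its stdout as a JSON status object. Calico ships its nodeagent~uds driver (the uds binary) via the pod2daemon-flexvol image whose container starts in this window, so until that container has copied the binary into place the exec fails, stdout is empty, and decoding "" yields exactly "unexpected end of JSON input". Below is a minimal driver-side sketch of that call convention in Go; the status fields follow the general FlexVolume convention and are illustrative, not taken from the Calico driver:

    package main

    // Hypothetical FlexVolume driver stub. kubelet invokes the executable with a
    // subcommand ("init" during plugin probing) and parses stdout as JSON; a
    // missing binary therefore produces empty output and the unmarshal error
    // seen in the log above.
    import (
        "encoding/json"
        "os"
    )

    type driverStatus struct {
        Status       string          `json:"status"` // "Success", "Failure", or "Not supported"
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        out := json.NewEncoder(os.Stdout)
        if len(os.Args) > 1 && os.Args[1] == "init" {
            // Declare success and tell kubelet not to expect attach/detach support.
            out.Encode(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
            return
        }
        out.Encode(driverStatus{Status: "Not supported"})
    }

Consistent with this reading, the probe errors do not recur after the flexvol-driver container (298cdd48...) runs; the cri-containerd scope deactivating right after a successful StartContainer is the normal exit of an init container that copies a file and terminates.
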
Jul 6 23:59:40.474370 containerd[1975]: time="2025-07-06T23:59:40.454765187Z" level=info msg="shim disconnected" id=298cdd485715b5e97c7f82b03d9e50c4d606fb3d1c144c5a2638c1ebecaa3526 namespace=k8s.io Jul 6 23:59:40.474370 containerd[1975]: time="2025-07-06T23:59:40.473636368Z" level=warning msg="cleaning up after shim disconnected" id=298cdd485715b5e97c7f82b03d9e50c4d606fb3d1c144c5a2638c1ebecaa3526 namespace=k8s.io Jul 6 23:59:40.474370 containerd[1975]: time="2025-07-06T23:59:40.473653170Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:59:40.676491 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-298cdd485715b5e97c7f82b03d9e50c4d606fb3d1c144c5a2638c1ebecaa3526-rootfs.mount: Deactivated successfully. Jul 6 23:59:41.006723 containerd[1975]: time="2025-07-06T23:59:41.006608984Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 6 23:59:41.028053 kubelet[3195]: I0706 23:59:41.025271 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-96cb4f8c-798vm" podStartSLOduration=4.019820142 podStartE2EDuration="7.025254888s" podCreationTimestamp="2025-07-06 23:59:34 +0000 UTC" firstStartedPulling="2025-07-06 23:59:35.654392002 +0000 UTC m=+22.926320331" lastFinishedPulling="2025-07-06 23:59:38.659826746 +0000 UTC m=+25.931755077" observedRunningTime="2025-07-06 23:59:39.079155575 +0000 UTC m=+26.351083923" watchObservedRunningTime="2025-07-06 23:59:41.025254888 +0000 UTC m=+28.297183261" Jul 6 23:59:41.882577 kubelet[3195]: E0706 23:59:41.882482 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lrnkv" podUID="6b84e45b-9676-47c1-bdf6-d1f78bd2c24a" Jul 6 23:59:43.882891 kubelet[3195]: E0706 23:59:43.882823 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lrnkv" podUID="6b84e45b-9676-47c1-bdf6-d1f78bd2c24a" Jul 6 23:59:45.039294 containerd[1975]: time="2025-07-06T23:59:45.039206207Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:45.040971 containerd[1975]: time="2025-07-06T23:59:45.040898010Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 6 23:59:45.043283 containerd[1975]: time="2025-07-06T23:59:45.043214801Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:45.050929 containerd[1975]: time="2025-07-06T23:59:45.049390718Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:45.053358 containerd[1975]: time="2025-07-06T23:59:45.052505672Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 4.045857737s" Jul 6 23:59:45.053358 containerd[1975]: time="2025-07-06T23:59:45.052613832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 6 23:59:45.075413 containerd[1975]: time="2025-07-06T23:59:45.075294539Z" level=info msg="CreateContainer within sandbox \"ea6cb44a8deed26b3d7386c1b8aa0cf04fa31918b660ee00f614d118b5cb50b8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 6 23:59:45.194017 containerd[1975]: time="2025-07-06T23:59:45.193967982Z" level=info msg="CreateContainer within sandbox \"ea6cb44a8deed26b3d7386c1b8aa0cf04fa31918b660ee00f614d118b5cb50b8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5da4f488ed4c687f91e4d518efcbae0792387b7d9a2e0a089cd50df54394bcd2\"" Jul 6 23:59:45.196438 containerd[1975]: time="2025-07-06T23:59:45.196371409Z" level=info msg="StartContainer for \"5da4f488ed4c687f91e4d518efcbae0792387b7d9a2e0a089cd50df54394bcd2\"" Jul 6 23:59:45.240545 systemd[1]: Started cri-containerd-5da4f488ed4c687f91e4d518efcbae0792387b7d9a2e0a089cd50df54394bcd2.scope - libcontainer container 5da4f488ed4c687f91e4d518efcbae0792387b7d9a2e0a089cd50df54394bcd2. Jul 6 23:59:45.280219 containerd[1975]: time="2025-07-06T23:59:45.280166113Z" level=info msg="StartContainer for \"5da4f488ed4c687f91e4d518efcbae0792387b7d9a2e0a089cd50df54394bcd2\" returns successfully" Jul 6 23:59:45.882798 kubelet[3195]: E0706 23:59:45.882710 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lrnkv" podUID="6b84e45b-9676-47c1-bdf6-d1f78bd2c24a" Jul 6 23:59:46.203382 systemd[1]: cri-containerd-5da4f488ed4c687f91e4d518efcbae0792387b7d9a2e0a089cd50df54394bcd2.scope: Deactivated successfully. Jul 6 23:59:46.237631 kubelet[3195]: I0706 23:59:46.237180 3195 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 6 23:59:46.253037 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5da4f488ed4c687f91e4d518efcbae0792387b7d9a2e0a089cd50df54394bcd2-rootfs.mount: Deactivated successfully. 
Jul 6 23:59:46.259026 containerd[1975]: time="2025-07-06T23:59:46.258635306Z" level=info msg="shim disconnected" id=5da4f488ed4c687f91e4d518efcbae0792387b7d9a2e0a089cd50df54394bcd2 namespace=k8s.io Jul 6 23:59:46.259569 containerd[1975]: time="2025-07-06T23:59:46.259079751Z" level=warning msg="cleaning up after shim disconnected" id=5da4f488ed4c687f91e4d518efcbae0792387b7d9a2e0a089cd50df54394bcd2 namespace=k8s.io Jul 6 23:59:46.259569 containerd[1975]: time="2025-07-06T23:59:46.259099460Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:59:46.346958 kubelet[3195]: I0706 23:59:46.346664 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19ccd617-9252-4320-ae1c-b3a2be4963b2-config\") pod \"goldmane-768f4c5c69-5rnc5\" (UID: \"19ccd617-9252-4320-ae1c-b3a2be4963b2\") " pod="calico-system/goldmane-768f4c5c69-5rnc5" Jul 6 23:59:46.346958 kubelet[3195]: I0706 23:59:46.346697 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93b5c195-f2cb-4978-9046-bbb50dfd5a25-config-volume\") pod \"coredns-674b8bbfcf-2s8cg\" (UID: \"93b5c195-f2cb-4978-9046-bbb50dfd5a25\") " pod="kube-system/coredns-674b8bbfcf-2s8cg" Jul 6 23:59:46.346958 kubelet[3195]: I0706 23:59:46.346714 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ddkh\" (UniqueName: \"kubernetes.io/projected/4073ee90-8739-4135-b438-25bdb06e58b4-kube-api-access-5ddkh\") pod \"calico-apiserver-78dd578d87-hbf8l\" (UID: \"4073ee90-8739-4135-b438-25bdb06e58b4\") " pod="calico-apiserver/calico-apiserver-78dd578d87-hbf8l" Jul 6 23:59:46.346958 kubelet[3195]: I0706 23:59:46.346734 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5549cc7-b328-4c09-b9a8-a657f9c3b244-tigera-ca-bundle\") pod \"calico-kube-controllers-6868664579-646k8\" (UID: \"f5549cc7-b328-4c09-b9a8-a657f9c3b244\") " pod="calico-system/calico-kube-controllers-6868664579-646k8" Jul 6 23:59:46.346958 kubelet[3195]: I0706 23:59:46.346749 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc9sj\" (UniqueName: \"kubernetes.io/projected/f5549cc7-b328-4c09-b9a8-a657f9c3b244-kube-api-access-lc9sj\") pod \"calico-kube-controllers-6868664579-646k8\" (UID: \"f5549cc7-b328-4c09-b9a8-a657f9c3b244\") " pod="calico-system/calico-kube-controllers-6868664579-646k8" Jul 6 23:59:46.347232 kubelet[3195]: I0706 23:59:46.346767 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/19ccd617-9252-4320-ae1c-b3a2be4963b2-goldmane-key-pair\") pod \"goldmane-768f4c5c69-5rnc5\" (UID: \"19ccd617-9252-4320-ae1c-b3a2be4963b2\") " pod="calico-system/goldmane-768f4c5c69-5rnc5" Jul 6 23:59:46.347232 kubelet[3195]: I0706 23:59:46.346782 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2f2d9ccd-ed3f-4f7d-92e5-c508b756bfdb-whisker-backend-key-pair\") pod \"whisker-5475cbb56f-7hvwg\" (UID: \"2f2d9ccd-ed3f-4f7d-92e5-c508b756bfdb\") " pod="calico-system/whisker-5475cbb56f-7hvwg" Jul 6 23:59:46.347232 kubelet[3195]: I0706 23:59:46.346796 3195 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk7z9\" (UniqueName: \"kubernetes.io/projected/2f2d9ccd-ed3f-4f7d-92e5-c508b756bfdb-kube-api-access-tk7z9\") pod \"whisker-5475cbb56f-7hvwg\" (UID: \"2f2d9ccd-ed3f-4f7d-92e5-c508b756bfdb\") " pod="calico-system/whisker-5475cbb56f-7hvwg" Jul 6 23:59:46.347232 kubelet[3195]: I0706 23:59:46.346814 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e6136ec6-ffc6-441a-9474-e2f8829c266e-calico-apiserver-certs\") pod \"calico-apiserver-8484c8784c-78zl4\" (UID: \"e6136ec6-ffc6-441a-9474-e2f8829c266e\") " pod="calico-apiserver/calico-apiserver-8484c8784c-78zl4" Jul 6 23:59:46.347232 kubelet[3195]: I0706 23:59:46.346828 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmm9g\" (UniqueName: \"kubernetes.io/projected/93b5c195-f2cb-4978-9046-bbb50dfd5a25-kube-api-access-bmm9g\") pod \"coredns-674b8bbfcf-2s8cg\" (UID: \"93b5c195-f2cb-4978-9046-bbb50dfd5a25\") " pod="kube-system/coredns-674b8bbfcf-2s8cg" Jul 6 23:59:46.347379 kubelet[3195]: I0706 23:59:46.346844 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw9gh\" (UniqueName: \"kubernetes.io/projected/e6136ec6-ffc6-441a-9474-e2f8829c266e-kube-api-access-xw9gh\") pod \"calico-apiserver-8484c8784c-78zl4\" (UID: \"e6136ec6-ffc6-441a-9474-e2f8829c266e\") " pod="calico-apiserver/calico-apiserver-8484c8784c-78zl4" Jul 6 23:59:46.347379 kubelet[3195]: I0706 23:59:46.346859 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4073ee90-8739-4135-b438-25bdb06e58b4-calico-apiserver-certs\") pod \"calico-apiserver-78dd578d87-hbf8l\" (UID: \"4073ee90-8739-4135-b438-25bdb06e58b4\") " pod="calico-apiserver/calico-apiserver-78dd578d87-hbf8l" Jul 6 23:59:46.347379 kubelet[3195]: I0706 23:59:46.346936 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19ccd617-9252-4320-ae1c-b3a2be4963b2-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-5rnc5\" (UID: \"19ccd617-9252-4320-ae1c-b3a2be4963b2\") " pod="calico-system/goldmane-768f4c5c69-5rnc5" Jul 6 23:59:46.347379 kubelet[3195]: I0706 23:59:46.346956 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/09c6b849-d9f2-457c-9d21-c2403e3bc700-calico-apiserver-certs\") pod \"calico-apiserver-78dd578d87-r8llj\" (UID: \"09c6b849-d9f2-457c-9d21-c2403e3bc700\") " pod="calico-apiserver/calico-apiserver-78dd578d87-r8llj" Jul 6 23:59:46.347379 kubelet[3195]: I0706 23:59:46.346999 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a11ff9fd-e988-4620-8c05-f0bff4ac262f-config-volume\") pod \"coredns-674b8bbfcf-7m765\" (UID: \"a11ff9fd-e988-4620-8c05-f0bff4ac262f\") " pod="kube-system/coredns-674b8bbfcf-7m765" Jul 6 23:59:46.347512 kubelet[3195]: I0706 23:59:46.347016 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc6pz\" (UniqueName: 
\"kubernetes.io/projected/a11ff9fd-e988-4620-8c05-f0bff4ac262f-kube-api-access-dc6pz\") pod \"coredns-674b8bbfcf-7m765\" (UID: \"a11ff9fd-e988-4620-8c05-f0bff4ac262f\") " pod="kube-system/coredns-674b8bbfcf-7m765" Jul 6 23:59:46.347512 kubelet[3195]: I0706 23:59:46.347033 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9ht8\" (UniqueName: \"kubernetes.io/projected/09c6b849-d9f2-457c-9d21-c2403e3bc700-kube-api-access-z9ht8\") pod \"calico-apiserver-78dd578d87-r8llj\" (UID: \"09c6b849-d9f2-457c-9d21-c2403e3bc700\") " pod="calico-apiserver/calico-apiserver-78dd578d87-r8llj" Jul 6 23:59:46.347512 kubelet[3195]: I0706 23:59:46.347068 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k5st\" (UniqueName: \"kubernetes.io/projected/19ccd617-9252-4320-ae1c-b3a2be4963b2-kube-api-access-8k5st\") pod \"goldmane-768f4c5c69-5rnc5\" (UID: \"19ccd617-9252-4320-ae1c-b3a2be4963b2\") " pod="calico-system/goldmane-768f4c5c69-5rnc5" Jul 6 23:59:46.347512 kubelet[3195]: I0706 23:59:46.347083 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f2d9ccd-ed3f-4f7d-92e5-c508b756bfdb-whisker-ca-bundle\") pod \"whisker-5475cbb56f-7hvwg\" (UID: \"2f2d9ccd-ed3f-4f7d-92e5-c508b756bfdb\") " pod="calico-system/whisker-5475cbb56f-7hvwg" Jul 6 23:59:46.354234 systemd[1]: Created slice kubepods-besteffort-pod2f2d9ccd_ed3f_4f7d_92e5_c508b756bfdb.slice - libcontainer container kubepods-besteffort-pod2f2d9ccd_ed3f_4f7d_92e5_c508b756bfdb.slice. Jul 6 23:59:46.360486 systemd[1]: Created slice kubepods-burstable-pod93b5c195_f2cb_4978_9046_bbb50dfd5a25.slice - libcontainer container kubepods-burstable-pod93b5c195_f2cb_4978_9046_bbb50dfd5a25.slice. Jul 6 23:59:46.369826 systemd[1]: Created slice kubepods-besteffort-podf5549cc7_b328_4c09_b9a8_a657f9c3b244.slice - libcontainer container kubepods-besteffort-podf5549cc7_b328_4c09_b9a8_a657f9c3b244.slice. Jul 6 23:59:46.375826 systemd[1]: Created slice kubepods-burstable-poda11ff9fd_e988_4620_8c05_f0bff4ac262f.slice - libcontainer container kubepods-burstable-poda11ff9fd_e988_4620_8c05_f0bff4ac262f.slice. Jul 6 23:59:46.383428 systemd[1]: Created slice kubepods-besteffort-pod19ccd617_9252_4320_ae1c_b3a2be4963b2.slice - libcontainer container kubepods-besteffort-pod19ccd617_9252_4320_ae1c_b3a2be4963b2.slice. Jul 6 23:59:46.395017 systemd[1]: Created slice kubepods-besteffort-pod4073ee90_8739_4135_b438_25bdb06e58b4.slice - libcontainer container kubepods-besteffort-pod4073ee90_8739_4135_b438_25bdb06e58b4.slice. Jul 6 23:59:46.401636 systemd[1]: Created slice kubepods-besteffort-pod09c6b849_d9f2_457c_9d21_c2403e3bc700.slice - libcontainer container kubepods-besteffort-pod09c6b849_d9f2_457c_9d21_c2403e3bc700.slice. Jul 6 23:59:46.408136 systemd[1]: Created slice kubepods-besteffort-pode6136ec6_ffc6_441a_9474_e2f8829c266e.slice - libcontainer container kubepods-besteffort-pode6136ec6_ffc6_441a_9474_e2f8829c266e.slice. 
Jul 6 23:59:46.667775 containerd[1975]: time="2025-07-06T23:59:46.667647120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2s8cg,Uid:93b5c195-f2cb-4978-9046-bbb50dfd5a25,Namespace:kube-system,Attempt:0,}" Jul 6 23:59:46.667996 containerd[1975]: time="2025-07-06T23:59:46.667953185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5475cbb56f-7hvwg,Uid:2f2d9ccd-ed3f-4f7d-92e5-c508b756bfdb,Namespace:calico-system,Attempt:0,}" Jul 6 23:59:46.674325 containerd[1975]: time="2025-07-06T23:59:46.674277502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6868664579-646k8,Uid:f5549cc7-b328-4c09-b9a8-a657f9c3b244,Namespace:calico-system,Attempt:0,}" Jul 6 23:59:46.680335 containerd[1975]: time="2025-07-06T23:59:46.680280618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7m765,Uid:a11ff9fd-e988-4620-8c05-f0bff4ac262f,Namespace:kube-system,Attempt:0,}" Jul 6 23:59:46.691731 containerd[1975]: time="2025-07-06T23:59:46.691692954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-5rnc5,Uid:19ccd617-9252-4320-ae1c-b3a2be4963b2,Namespace:calico-system,Attempt:0,}" Jul 6 23:59:46.702091 containerd[1975]: time="2025-07-06T23:59:46.702043599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78dd578d87-hbf8l,Uid:4073ee90-8739-4135-b438-25bdb06e58b4,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:59:46.706570 containerd[1975]: time="2025-07-06T23:59:46.706528023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78dd578d87-r8llj,Uid:09c6b849-d9f2-457c-9d21-c2403e3bc700,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:59:46.711554 containerd[1975]: time="2025-07-06T23:59:46.711513017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8484c8784c-78zl4,Uid:e6136ec6-ffc6-441a-9474-e2f8829c266e,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:59:47.140731 containerd[1975]: time="2025-07-06T23:59:47.140660175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 6 23:59:47.207821 containerd[1975]: time="2025-07-06T23:59:47.207689259Z" level=error msg="Failed to destroy network for sandbox \"3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:47.212816 containerd[1975]: time="2025-07-06T23:59:47.212735438Z" level=error msg="encountered an error cleaning up failed sandbox \"3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:47.213164 containerd[1975]: time="2025-07-06T23:59:47.213092599Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2s8cg,Uid:93b5c195-f2cb-4978-9046-bbb50dfd5a25,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:47.227249 containerd[1975]: 
time="2025-07-06T23:59:47.227190782Z" level=error msg="Failed to destroy network for sandbox \"ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:47.227893 containerd[1975]: time="2025-07-06T23:59:47.227762741Z" level=error msg="encountered an error cleaning up failed sandbox \"ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:47.227893 containerd[1975]: time="2025-07-06T23:59:47.227833931Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78dd578d87-hbf8l,Uid:4073ee90-8739-4135-b438-25bdb06e58b4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:47.228858 containerd[1975]: time="2025-07-06T23:59:47.228821513Z" level=error msg="Failed to destroy network for sandbox \"80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:47.229289 containerd[1975]: time="2025-07-06T23:59:47.229242093Z" level=error msg="encountered an error cleaning up failed sandbox \"80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:47.229453 containerd[1975]: time="2025-07-06T23:59:47.229311019Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5475cbb56f-7hvwg,Uid:2f2d9ccd-ed3f-4f7d-92e5-c508b756bfdb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:47.243010 kubelet[3195]: E0706 23:59:47.242946 3195 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:47.243518 kubelet[3195]: E0706 23:59:47.243035 3195 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5475cbb56f-7hvwg" Jul 6 23:59:47.243518 kubelet[3195]: E0706 23:59:47.243075 3195 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5475cbb56f-7hvwg" Jul 6 23:59:47.243518 kubelet[3195]: E0706 23:59:47.243140 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5475cbb56f-7hvwg_calico-system(2f2d9ccd-ed3f-4f7d-92e5-c508b756bfdb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5475cbb56f-7hvwg_calico-system(2f2d9ccd-ed3f-4f7d-92e5-c508b756bfdb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5475cbb56f-7hvwg" podUID="2f2d9ccd-ed3f-4f7d-92e5-c508b756bfdb" Jul 6 23:59:47.245644 kubelet[3195]: E0706 23:59:47.231191 3195 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:47.247436 kubelet[3195]: E0706 23:59:47.245942 3195 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2s8cg" Jul 6 23:59:47.254131 kubelet[3195]: E0706 23:59:47.247187 3195 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2s8cg" Jul 6 23:59:47.254131 kubelet[3195]: E0706 23:59:47.251032 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-2s8cg_kube-system(93b5c195-f2cb-4978-9046-bbb50dfd5a25)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-2s8cg_kube-system(93b5c195-f2cb-4978-9046-bbb50dfd5a25)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-674b8bbfcf-2s8cg" podUID="93b5c195-f2cb-4978-9046-bbb50dfd5a25" Jul 6 23:59:47.254131 kubelet[3195]: E0706 23:59:47.231260 3195 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:47.254432 kubelet[3195]: E0706 23:59:47.251913 3195 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78dd578d87-hbf8l" Jul 6 23:59:47.254432 kubelet[3195]: E0706 23:59:47.251940 3195 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78dd578d87-hbf8l" Jul 6 23:59:47.254432 kubelet[3195]: E0706 23:59:47.251992 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78dd578d87-hbf8l_calico-apiserver(4073ee90-8739-4135-b438-25bdb06e58b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78dd578d87-hbf8l_calico-apiserver(4073ee90-8739-4135-b438-25bdb06e58b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78dd578d87-hbf8l" podUID="4073ee90-8739-4135-b438-25bdb06e58b4" Jul 6 23:59:47.297624 containerd[1975]: time="2025-07-06T23:59:47.297542059Z" level=error msg="Failed to destroy network for sandbox \"c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:47.301407 containerd[1975]: time="2025-07-06T23:59:47.298094970Z" level=error msg="encountered an error cleaning up failed sandbox \"c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:47.301407 containerd[1975]: time="2025-07-06T23:59:47.298178040Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6868664579-646k8,Uid:f5549cc7-b328-4c09-b9a8-a657f9c3b244,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:47.306546 kubelet[3195]: E0706 23:59:47.304767 3195 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:47.306546 kubelet[3195]: E0706 23:59:47.304835 3195 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6868664579-646k8" Jul 6 23:59:47.306546 kubelet[3195]: E0706 23:59:47.306119 3195 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6868664579-646k8" Jul 6 23:59:47.306802 kubelet[3195]: E0706 23:59:47.306237 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6868664579-646k8_calico-system(f5549cc7-b328-4c09-b9a8-a657f9c3b244)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6868664579-646k8_calico-system(f5549cc7-b328-4c09-b9a8-a657f9c3b244)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6868664579-646k8" podUID="f5549cc7-b328-4c09-b9a8-a657f9c3b244" Jul 6 23:59:47.307482 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5-shm.mount: Deactivated successfully. 
Jul 6 23:59:47.314781 containerd[1975]: time="2025-07-06T23:59:47.314587957Z" level=error msg="Failed to destroy network for sandbox \"df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:47.314929 containerd[1975]: time="2025-07-06T23:59:47.314597111Z" level=error msg="Failed to destroy network for sandbox \"186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:47.317484 containerd[1975]: time="2025-07-06T23:59:47.317195674Z" level=error msg="encountered an error cleaning up failed sandbox \"df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:47.317484 containerd[1975]: time="2025-07-06T23:59:47.317280288Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78dd578d87-r8llj,Uid:09c6b849-d9f2-457c-9d21-c2403e3bc700,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:47.317484 containerd[1975]: time="2025-07-06T23:59:47.317221464Z" level=error msg="encountered an error cleaning up failed sandbox \"186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:47.317484 containerd[1975]: time="2025-07-06T23:59:47.317421852Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7m765,Uid:a11ff9fd-e988-4620-8c05-f0bff4ac262f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:47.317844 kubelet[3195]: E0706 23:59:47.317549 3195 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:47.317844 kubelet[3195]: E0706 23:59:47.317616 3195 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78dd578d87-r8llj" Jul 6 23:59:47.317844 kubelet[3195]: E0706 23:59:47.317643 3195 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78dd578d87-r8llj" Jul 6 23:59:47.321603 kubelet[3195]: E0706 23:59:47.317703 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78dd578d87-r8llj_calico-apiserver(09c6b849-d9f2-457c-9d21-c2403e3bc700)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78dd578d87-r8llj_calico-apiserver(09c6b849-d9f2-457c-9d21-c2403e3bc700)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78dd578d87-r8llj" podUID="09c6b849-d9f2-457c-9d21-c2403e3bc700" Jul 6 23:59:47.321603 kubelet[3195]: E0706 23:59:47.318327 3195 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:47.321603 kubelet[3195]: E0706 23:59:47.318380 3195 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7m765" Jul 6 23:59:47.321819 containerd[1975]: time="2025-07-06T23:59:47.320445918Z" level=error msg="Failed to destroy network for sandbox \"a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:47.321898 kubelet[3195]: E0706 23:59:47.318406 3195 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7m765" Jul 6 23:59:47.321898 kubelet[3195]: E0706 23:59:47.318460 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-7m765_kube-system(a11ff9fd-e988-4620-8c05-f0bff4ac262f)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-7m765_kube-system(a11ff9fd-e988-4620-8c05-f0bff4ac262f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-7m765" podUID="a11ff9fd-e988-4620-8c05-f0bff4ac262f" Jul 6 23:59:47.325480 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1-shm.mount: Deactivated successfully. Jul 6 23:59:47.325623 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5-shm.mount: Deactivated successfully. Jul 6 23:59:47.335779 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328-shm.mount: Deactivated successfully. Jul 6 23:59:47.343675 containerd[1975]: time="2025-07-06T23:59:47.321459064Z" level=error msg="encountered an error cleaning up failed sandbox \"a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:47.346769 containerd[1975]: time="2025-07-06T23:59:47.343707847Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-5rnc5,Uid:19ccd617-9252-4320-ae1c-b3a2be4963b2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:47.346769 containerd[1975]: time="2025-07-06T23:59:47.332298944Z" level=error msg="Failed to destroy network for sandbox \"adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:47.346769 containerd[1975]: time="2025-07-06T23:59:47.344181260Z" level=error msg="encountered an error cleaning up failed sandbox \"adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:47.346769 containerd[1975]: time="2025-07-06T23:59:47.344229828Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8484c8784c-78zl4,Uid:e6136ec6-ffc6-441a-9474-e2f8829c266e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:47.347236 kubelet[3195]: E0706 23:59:47.343965 
3195 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:47.347236 kubelet[3195]: E0706 23:59:47.344026 3195 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-5rnc5" Jul 6 23:59:47.347236 kubelet[3195]: E0706 23:59:47.344054 3195 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-5rnc5" Jul 6 23:59:47.347468 kubelet[3195]: E0706 23:59:47.344122 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-5rnc5_calico-system(19ccd617-9252-4320-ae1c-b3a2be4963b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-5rnc5_calico-system(19ccd617-9252-4320-ae1c-b3a2be4963b2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-5rnc5" podUID="19ccd617-9252-4320-ae1c-b3a2be4963b2" Jul 6 23:59:47.347468 kubelet[3195]: E0706 23:59:47.346284 3195 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:47.347468 kubelet[3195]: E0706 23:59:47.346351 3195 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8484c8784c-78zl4" Jul 6 23:59:47.347757 kubelet[3195]: E0706 23:59:47.346408 3195 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-8484c8784c-78zl4" Jul 6 23:59:47.347757 kubelet[3195]: E0706 23:59:47.346499 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8484c8784c-78zl4_calico-apiserver(e6136ec6-ffc6-441a-9474-e2f8829c266e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8484c8784c-78zl4_calico-apiserver(e6136ec6-ffc6-441a-9474-e2f8829c266e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8484c8784c-78zl4" podUID="e6136ec6-ffc6-441a-9474-e2f8829c266e" Jul 6 23:59:47.350515 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116-shm.mount: Deactivated successfully. Jul 6 23:59:47.888994 systemd[1]: Created slice kubepods-besteffort-pod6b84e45b_9676_47c1_bdf6_d1f78bd2c24a.slice - libcontainer container kubepods-besteffort-pod6b84e45b_9676_47c1_bdf6_d1f78bd2c24a.slice. Jul 6 23:59:47.892576 containerd[1975]: time="2025-07-06T23:59:47.892531234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lrnkv,Uid:6b84e45b-9676-47c1-bdf6-d1f78bd2c24a,Namespace:calico-system,Attempt:0,}" Jul 6 23:59:47.981045 containerd[1975]: time="2025-07-06T23:59:47.980989362Z" level=error msg="Failed to destroy network for sandbox \"1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:47.981625 containerd[1975]: time="2025-07-06T23:59:47.981576304Z" level=error msg="encountered an error cleaning up failed sandbox \"1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:47.981744 containerd[1975]: time="2025-07-06T23:59:47.981643498Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lrnkv,Uid:6b84e45b-9676-47c1-bdf6-d1f78bd2c24a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:47.981965 kubelet[3195]: E0706 23:59:47.981914 3195 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:47.982052 kubelet[3195]: E0706 23:59:47.981976 3195 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lrnkv" Jul 6 23:59:47.982052 kubelet[3195]: E0706 23:59:47.982021 3195 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lrnkv" Jul 6 23:59:47.982408 kubelet[3195]: E0706 23:59:47.982109 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lrnkv_calico-system(6b84e45b-9676-47c1-bdf6-d1f78bd2c24a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lrnkv_calico-system(6b84e45b-9676-47c1-bdf6-d1f78bd2c24a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lrnkv" podUID="6b84e45b-9676-47c1-bdf6-d1f78bd2c24a" Jul 6 23:59:48.089930 kubelet[3195]: I0706 23:59:48.089698 3195 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154" Jul 6 23:59:48.092262 kubelet[3195]: I0706 23:59:48.092221 3195 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5" Jul 6 23:59:48.099653 kubelet[3195]: I0706 23:59:48.099577 3195 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613" Jul 6 23:59:48.103741 kubelet[3195]: I0706 23:59:48.103662 3195 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3" Jul 6 23:59:48.122146 kubelet[3195]: I0706 23:59:48.122108 3195 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116" Jul 6 23:59:48.131505 containerd[1975]: time="2025-07-06T23:59:48.131456473Z" level=info msg="StopPodSandbox for \"adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116\"" Jul 6 23:59:48.133852 containerd[1975]: time="2025-07-06T23:59:48.132668476Z" level=info msg="StopPodSandbox for \"186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5\"" Jul 6 23:59:48.133852 containerd[1975]: time="2025-07-06T23:59:48.133833589Z" level=info msg="Ensure that sandbox 186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5 in task-service has been cleanup successfully" Jul 6 23:59:48.134098 containerd[1975]: time="2025-07-06T23:59:48.134073303Z" level=info msg="Ensure that sandbox adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116 in task-service has been cleanup successfully" Jul 6 23:59:48.136076 containerd[1975]: time="2025-07-06T23:59:48.136037667Z" 
level=info msg="StopPodSandbox for \"3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613\"" Jul 6 23:59:48.136266 containerd[1975]: time="2025-07-06T23:59:48.136242264Z" level=info msg="Ensure that sandbox 3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613 in task-service has been cleanup successfully" Jul 6 23:59:48.136679 containerd[1975]: time="2025-07-06T23:59:48.136646591Z" level=info msg="StopPodSandbox for \"ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154\"" Jul 6 23:59:48.137788 containerd[1975]: time="2025-07-06T23:59:48.136851941Z" level=info msg="Ensure that sandbox ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154 in task-service has been cleanup successfully" Jul 6 23:59:48.140092 kubelet[3195]: I0706 23:59:48.139984 3195 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1" Jul 6 23:59:48.140706 containerd[1975]: time="2025-07-06T23:59:48.140667415Z" level=info msg="StopPodSandbox for \"1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3\"" Jul 6 23:59:48.141261 containerd[1975]: time="2025-07-06T23:59:48.140908487Z" level=info msg="Ensure that sandbox 1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3 in task-service has been cleanup successfully" Jul 6 23:59:48.146074 containerd[1975]: time="2025-07-06T23:59:48.146023838Z" level=info msg="StopPodSandbox for \"df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1\"" Jul 6 23:59:48.156145 containerd[1975]: time="2025-07-06T23:59:48.156101716Z" level=info msg="Ensure that sandbox df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1 in task-service has been cleanup successfully" Jul 6 23:59:48.173053 kubelet[3195]: I0706 23:59:48.173022 3195 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328" Jul 6 23:59:48.174803 containerd[1975]: time="2025-07-06T23:59:48.174751794Z" level=info msg="StopPodSandbox for \"a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328\"" Jul 6 23:59:48.175033 containerd[1975]: time="2025-07-06T23:59:48.175006507Z" level=info msg="Ensure that sandbox a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328 in task-service has been cleanup successfully" Jul 6 23:59:48.190316 kubelet[3195]: I0706 23:59:48.190286 3195 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5" Jul 6 23:59:48.196737 containerd[1975]: time="2025-07-06T23:59:48.196695580Z" level=info msg="StopPodSandbox for \"c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5\"" Jul 6 23:59:48.197101 containerd[1975]: time="2025-07-06T23:59:48.197006341Z" level=info msg="Ensure that sandbox c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5 in task-service has been cleanup successfully" Jul 6 23:59:48.205435 kubelet[3195]: I0706 23:59:48.205401 3195 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4" Jul 6 23:59:48.214927 containerd[1975]: time="2025-07-06T23:59:48.214885540Z" level=info msg="StopPodSandbox for \"80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4\"" Jul 6 23:59:48.215138 containerd[1975]: time="2025-07-06T23:59:48.215110979Z" level=info msg="Ensure that sandbox 
80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4 in task-service has been cleanup successfully" Jul 6 23:59:48.256261 containerd[1975]: time="2025-07-06T23:59:48.256181177Z" level=error msg="StopPodSandbox for \"186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5\" failed" error="failed to destroy network for sandbox \"186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:48.260690 kubelet[3195]: E0706 23:59:48.258611 3195 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5" Jul 6 23:59:48.262156 kubelet[3195]: E0706 23:59:48.261969 3195 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5"} Jul 6 23:59:48.262156 kubelet[3195]: E0706 23:59:48.262072 3195 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a11ff9fd-e988-4620-8c05-f0bff4ac262f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:59:48.262156 kubelet[3195]: E0706 23:59:48.262114 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a11ff9fd-e988-4620-8c05-f0bff4ac262f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-7m765" podUID="a11ff9fd-e988-4620-8c05-f0bff4ac262f" Jul 6 23:59:48.315427 containerd[1975]: time="2025-07-06T23:59:48.315255189Z" level=error msg="StopPodSandbox for \"3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613\" failed" error="failed to destroy network for sandbox \"3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:48.315888 kubelet[3195]: E0706 23:59:48.315520 3195 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613" Jul 6 23:59:48.315888 kubelet[3195]: E0706 23:59:48.315582 3195 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613"} Jul 6 23:59:48.315888 kubelet[3195]: E0706 23:59:48.315624 3195 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"93b5c195-f2cb-4978-9046-bbb50dfd5a25\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:59:48.315888 kubelet[3195]: E0706 23:59:48.315657 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"93b5c195-f2cb-4978-9046-bbb50dfd5a25\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-2s8cg" podUID="93b5c195-f2cb-4978-9046-bbb50dfd5a25" Jul 6 23:59:48.332799 containerd[1975]: time="2025-07-06T23:59:48.332661531Z" level=error msg="StopPodSandbox for \"c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5\" failed" error="failed to destroy network for sandbox \"c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:48.333038 kubelet[3195]: E0706 23:59:48.332963 3195 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5" Jul 6 23:59:48.333114 kubelet[3195]: E0706 23:59:48.333064 3195 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5"} Jul 6 23:59:48.333167 kubelet[3195]: E0706 23:59:48.333107 3195 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f5549cc7-b328-4c09-b9a8-a657f9c3b244\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:59:48.333256 kubelet[3195]: E0706 23:59:48.333156 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f5549cc7-b328-4c09-b9a8-a657f9c3b244\" with KillPodSandboxError: \"rpc error: code = 
Unknown desc = failed to destroy network for sandbox \\\"c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6868664579-646k8" podUID="f5549cc7-b328-4c09-b9a8-a657f9c3b244" Jul 6 23:59:48.381225 containerd[1975]: time="2025-07-06T23:59:48.381157622Z" level=error msg="StopPodSandbox for \"ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154\" failed" error="failed to destroy network for sandbox \"ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:48.381788 kubelet[3195]: E0706 23:59:48.381437 3195 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154" Jul 6 23:59:48.381788 kubelet[3195]: E0706 23:59:48.381495 3195 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154"} Jul 6 23:59:48.381788 kubelet[3195]: E0706 23:59:48.381541 3195 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4073ee90-8739-4135-b438-25bdb06e58b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:59:48.381788 kubelet[3195]: E0706 23:59:48.381573 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4073ee90-8739-4135-b438-25bdb06e58b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78dd578d87-hbf8l" podUID="4073ee90-8739-4135-b438-25bdb06e58b4" Jul 6 23:59:48.408284 containerd[1975]: time="2025-07-06T23:59:48.408147716Z" level=error msg="StopPodSandbox for \"adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116\" failed" error="failed to destroy network for sandbox \"adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:48.408996 kubelet[3195]: E0706 23:59:48.408672 3195 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for 
sandbox \"adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116" Jul 6 23:59:48.408996 kubelet[3195]: E0706 23:59:48.408768 3195 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116"} Jul 6 23:59:48.408996 kubelet[3195]: E0706 23:59:48.408851 3195 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e6136ec6-ffc6-441a-9474-e2f8829c266e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:59:48.408996 kubelet[3195]: E0706 23:59:48.408910 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e6136ec6-ffc6-441a-9474-e2f8829c266e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8484c8784c-78zl4" podUID="e6136ec6-ffc6-441a-9474-e2f8829c266e" Jul 6 23:59:48.418272 containerd[1975]: time="2025-07-06T23:59:48.418021954Z" level=error msg="StopPodSandbox for \"1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3\" failed" error="failed to destroy network for sandbox \"1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:48.418272 containerd[1975]: time="2025-07-06T23:59:48.418191275Z" level=error msg="StopPodSandbox for \"a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328\" failed" error="failed to destroy network for sandbox \"a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:48.418436 kubelet[3195]: E0706 23:59:48.418397 3195 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328" Jul 6 23:59:48.418520 kubelet[3195]: E0706 23:59:48.418452 3195 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328"} Jul 6 23:59:48.418520 kubelet[3195]: E0706 
23:59:48.418493 3195 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"19ccd617-9252-4320-ae1c-b3a2be4963b2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:59:48.418702 kubelet[3195]: E0706 23:59:48.418525 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"19ccd617-9252-4320-ae1c-b3a2be4963b2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-5rnc5" podUID="19ccd617-9252-4320-ae1c-b3a2be4963b2" Jul 6 23:59:48.418702 kubelet[3195]: E0706 23:59:48.418565 3195 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3" Jul 6 23:59:48.418702 kubelet[3195]: E0706 23:59:48.418587 3195 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3"} Jul 6 23:59:48.418702 kubelet[3195]: E0706 23:59:48.418612 3195 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6b84e45b-9676-47c1-bdf6-d1f78bd2c24a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:59:48.418897 kubelet[3195]: E0706 23:59:48.418640 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6b84e45b-9676-47c1-bdf6-d1f78bd2c24a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lrnkv" podUID="6b84e45b-9676-47c1-bdf6-d1f78bd2c24a" Jul 6 23:59:48.421075 containerd[1975]: time="2025-07-06T23:59:48.421033401Z" level=error msg="StopPodSandbox for \"df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1\" failed" error="failed to destroy network for sandbox \"df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jul 6 23:59:48.421592 kubelet[3195]: E0706 23:59:48.421359 3195 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1" Jul 6 23:59:48.421592 kubelet[3195]: E0706 23:59:48.421475 3195 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1"} Jul 6 23:59:48.421592 kubelet[3195]: E0706 23:59:48.421522 3195 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"09c6b849-d9f2-457c-9d21-c2403e3bc700\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:59:48.421592 kubelet[3195]: E0706 23:59:48.421555 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"09c6b849-d9f2-457c-9d21-c2403e3bc700\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78dd578d87-r8llj" podUID="09c6b849-d9f2-457c-9d21-c2403e3bc700" Jul 6 23:59:48.422556 containerd[1975]: time="2025-07-06T23:59:48.422257432Z" level=error msg="StopPodSandbox for \"80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4\" failed" error="failed to destroy network for sandbox \"80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:59:48.422647 kubelet[3195]: E0706 23:59:48.422449 3195 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4" Jul 6 23:59:48.422647 kubelet[3195]: E0706 23:59:48.422520 3195 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4"} Jul 6 23:59:48.422862 kubelet[3195]: E0706 23:59:48.422781 3195 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2f2d9ccd-ed3f-4f7d-92e5-c508b756bfdb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for 
sandbox \\\"80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:59:48.422862 kubelet[3195]: E0706 23:59:48.422818 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2f2d9ccd-ed3f-4f7d-92e5-c508b756bfdb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5475cbb56f-7hvwg" podUID="2f2d9ccd-ed3f-4f7d-92e5-c508b756bfdb" Jul 6 23:59:53.688981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3576718153.mount: Deactivated successfully. Jul 6 23:59:53.771392 containerd[1975]: time="2025-07-06T23:59:53.771322723Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:53.780411 containerd[1975]: time="2025-07-06T23:59:53.762617294Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 6 23:59:53.819049 containerd[1975]: time="2025-07-06T23:59:53.818998640Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:53.820241 containerd[1975]: time="2025-07-06T23:59:53.819778768Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 6.679056648s" Jul 6 23:59:53.820241 containerd[1975]: time="2025-07-06T23:59:53.819825826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 6 23:59:53.820449 containerd[1975]: time="2025-07-06T23:59:53.820343554Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:53.868028 containerd[1975]: time="2025-07-06T23:59:53.867967583Z" level=info msg="CreateContainer within sandbox \"ea6cb44a8deed26b3d7386c1b8aa0cf04fa31918b660ee00f614d118b5cb50b8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 6 23:59:53.931548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount676853353.mount: Deactivated successfully. 
Jul 6 23:59:53.945838 containerd[1975]: time="2025-07-06T23:59:53.944464561Z" level=info msg="CreateContainer within sandbox \"ea6cb44a8deed26b3d7386c1b8aa0cf04fa31918b660ee00f614d118b5cb50b8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"389de9738b6dd6c4cf1e58967c70986343b93dc4db688d04e182e532315bc447\"" Jul 6 23:59:53.945838 containerd[1975]: time="2025-07-06T23:59:53.945343216Z" level=info msg="StartContainer for \"389de9738b6dd6c4cf1e58967c70986343b93dc4db688d04e182e532315bc447\"" Jul 6 23:59:54.132075 systemd[1]: Started cri-containerd-389de9738b6dd6c4cf1e58967c70986343b93dc4db688d04e182e532315bc447.scope - libcontainer container 389de9738b6dd6c4cf1e58967c70986343b93dc4db688d04e182e532315bc447. Jul 6 23:59:54.173230 containerd[1975]: time="2025-07-06T23:59:54.173185274Z" level=info msg="StartContainer for \"389de9738b6dd6c4cf1e58967c70986343b93dc4db688d04e182e532315bc447\" returns successfully" Jul 6 23:59:54.362024 kubelet[3195]: I0706 23:59:54.355218 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lhqmw" podStartSLOduration=1.42676904 podStartE2EDuration="19.330547253s" podCreationTimestamp="2025-07-06 23:59:35 +0000 UTC" firstStartedPulling="2025-07-06 23:59:35.938302028 +0000 UTC m=+23.210230365" lastFinishedPulling="2025-07-06 23:59:53.842080242 +0000 UTC m=+41.114008578" observedRunningTime="2025-07-06 23:59:54.328195737 +0000 UTC m=+41.600124083" watchObservedRunningTime="2025-07-06 23:59:54.330547253 +0000 UTC m=+41.602475602" Jul 6 23:59:54.399966 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 6 23:59:54.400969 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jul 6 23:59:54.779625 containerd[1975]: time="2025-07-06T23:59:54.779206221Z" level=info msg="StopPodSandbox for \"80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4\"" Jul 6 23:59:55.295793 containerd[1975]: 2025-07-06 23:59:54.910 [INFO][4668] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4" Jul 6 23:59:55.295793 containerd[1975]: 2025-07-06 23:59:54.910 [INFO][4668] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4" iface="eth0" netns="/var/run/netns/cni-9ef59bfe-c4e5-8a59-722c-7cc28de496b2" Jul 6 23:59:55.295793 containerd[1975]: 2025-07-06 23:59:54.911 [INFO][4668] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4" iface="eth0" netns="/var/run/netns/cni-9ef59bfe-c4e5-8a59-722c-7cc28de496b2" Jul 6 23:59:55.295793 containerd[1975]: 2025-07-06 23:59:54.912 [INFO][4668] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do.
ContainerID="80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4" iface="eth0" netns="/var/run/netns/cni-9ef59bfe-c4e5-8a59-722c-7cc28de496b2" Jul 6 23:59:55.295793 containerd[1975]: 2025-07-06 23:59:54.912 [INFO][4668] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4" Jul 6 23:59:55.295793 containerd[1975]: 2025-07-06 23:59:54.912 [INFO][4668] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4" Jul 6 23:59:55.295793 containerd[1975]: 2025-07-06 23:59:55.267 [INFO][4680] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4" HandleID="k8s-pod-network.80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4" Workload="ip--172--31--21--95-k8s-whisker--5475cbb56f--7hvwg-eth0" Jul 6 23:59:55.295793 containerd[1975]: 2025-07-06 23:59:55.271 [INFO][4680] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:59:55.295793 containerd[1975]: 2025-07-06 23:59:55.272 [INFO][4680] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:59:55.295793 containerd[1975]: 2025-07-06 23:59:55.288 [WARNING][4680] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4" HandleID="k8s-pod-network.80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4" Workload="ip--172--31--21--95-k8s-whisker--5475cbb56f--7hvwg-eth0" Jul 6 23:59:55.295793 containerd[1975]: 2025-07-06 23:59:55.288 [INFO][4680] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4" HandleID="k8s-pod-network.80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4" Workload="ip--172--31--21--95-k8s-whisker--5475cbb56f--7hvwg-eth0" Jul 6 23:59:55.295793 containerd[1975]: 2025-07-06 23:59:55.291 [INFO][4680] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:59:55.295793 containerd[1975]: 2025-07-06 23:59:55.293 [INFO][4668] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4" Jul 6 23:59:55.298990 containerd[1975]: time="2025-07-06T23:59:55.297796201Z" level=info msg="TearDown network for sandbox \"80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4\" successfully" Jul 6 23:59:55.298990 containerd[1975]: time="2025-07-06T23:59:55.297823726Z" level=info msg="StopPodSandbox for \"80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4\" returns successfully" Jul 6 23:59:55.300836 systemd[1]: run-netns-cni\x2d9ef59bfe\x2dc4e5\x2d8a59\x2d722c\x2d7cc28de496b2.mount: Deactivated successfully. 
Jul 6 23:59:55.306479 kubelet[3195]: I0706 23:59:55.306448 3195 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:59:55.393563 kubelet[3195]: I0706 23:59:55.393338 3195 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk7z9\" (UniqueName: \"kubernetes.io/projected/2f2d9ccd-ed3f-4f7d-92e5-c508b756bfdb-kube-api-access-tk7z9\") pod \"2f2d9ccd-ed3f-4f7d-92e5-c508b756bfdb\" (UID: \"2f2d9ccd-ed3f-4f7d-92e5-c508b756bfdb\") " Jul 6 23:59:55.393563 kubelet[3195]: I0706 23:59:55.393547 3195 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f2d9ccd-ed3f-4f7d-92e5-c508b756bfdb-whisker-ca-bundle\") pod \"2f2d9ccd-ed3f-4f7d-92e5-c508b756bfdb\" (UID: \"2f2d9ccd-ed3f-4f7d-92e5-c508b756bfdb\") " Jul 6 23:59:55.394716 kubelet[3195]: I0706 23:59:55.393596 3195 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2f2d9ccd-ed3f-4f7d-92e5-c508b756bfdb-whisker-backend-key-pair\") pod \"2f2d9ccd-ed3f-4f7d-92e5-c508b756bfdb\" (UID: \"2f2d9ccd-ed3f-4f7d-92e5-c508b756bfdb\") " Jul 6 23:59:55.427262 kubelet[3195]: I0706 23:59:55.427078 3195 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f2d9ccd-ed3f-4f7d-92e5-c508b756bfdb-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "2f2d9ccd-ed3f-4f7d-92e5-c508b756bfdb" (UID: "2f2d9ccd-ed3f-4f7d-92e5-c508b756bfdb"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 6 23:59:55.427551 systemd[1]: var-lib-kubelet-pods-2f2d9ccd\x2ded3f\x2d4f7d\x2d92e5\x2dc508b756bfdb-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 6 23:59:55.428608 kubelet[3195]: I0706 23:59:55.428556 3195 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f2d9ccd-ed3f-4f7d-92e5-c508b756bfdb-kube-api-access-tk7z9" (OuterVolumeSpecName: "kube-api-access-tk7z9") pod "2f2d9ccd-ed3f-4f7d-92e5-c508b756bfdb" (UID: "2f2d9ccd-ed3f-4f7d-92e5-c508b756bfdb"). InnerVolumeSpecName "kube-api-access-tk7z9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:59:55.431028 systemd[1]: var-lib-kubelet-pods-2f2d9ccd\x2ded3f\x2d4f7d\x2d92e5\x2dc508b756bfdb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtk7z9.mount: Deactivated successfully. Jul 6 23:59:55.434458 kubelet[3195]: I0706 23:59:55.422424 3195 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f2d9ccd-ed3f-4f7d-92e5-c508b756bfdb-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "2f2d9ccd-ed3f-4f7d-92e5-c508b756bfdb" (UID: "2f2d9ccd-ed3f-4f7d-92e5-c508b756bfdb"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 6 23:59:55.494532 kubelet[3195]: I0706 23:59:55.494486 3195 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2f2d9ccd-ed3f-4f7d-92e5-c508b756bfdb-whisker-backend-key-pair\") on node \"ip-172-31-21-95\" DevicePath \"\"" Jul 6 23:59:55.494532 kubelet[3195]: I0706 23:59:55.494522 3195 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tk7z9\" (UniqueName: \"kubernetes.io/projected/2f2d9ccd-ed3f-4f7d-92e5-c508b756bfdb-kube-api-access-tk7z9\") on node \"ip-172-31-21-95\" DevicePath \"\"" Jul 6 23:59:55.494532 kubelet[3195]: I0706 23:59:55.494535 3195 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f2d9ccd-ed3f-4f7d-92e5-c508b756bfdb-whisker-ca-bundle\") on node \"ip-172-31-21-95\" DevicePath \"\"" Jul 6 23:59:56.375659 systemd[1]: Removed slice kubepods-besteffort-pod2f2d9ccd_ed3f_4f7d_92e5_c508b756bfdb.slice - libcontainer container kubepods-besteffort-pod2f2d9ccd_ed3f_4f7d_92e5_c508b756bfdb.slice. Jul 6 23:59:56.532691 systemd[1]: Created slice kubepods-besteffort-pod9d8f8675_847a_473f_9615_86a34ddf702c.slice - libcontainer container kubepods-besteffort-pod9d8f8675_847a_473f_9615_86a34ddf702c.slice. Jul 6 23:59:56.600610 kubelet[3195]: I0706 23:59:56.600571 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs65h\" (UniqueName: \"kubernetes.io/projected/9d8f8675-847a-473f-9615-86a34ddf702c-kube-api-access-cs65h\") pod \"whisker-69d69f759-8mmlz\" (UID: \"9d8f8675-847a-473f-9615-86a34ddf702c\") " pod="calico-system/whisker-69d69f759-8mmlz" Jul 6 23:59:56.601023 kubelet[3195]: I0706 23:59:56.600692 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d8f8675-847a-473f-9615-86a34ddf702c-whisker-ca-bundle\") pod \"whisker-69d69f759-8mmlz\" (UID: \"9d8f8675-847a-473f-9615-86a34ddf702c\") " pod="calico-system/whisker-69d69f759-8mmlz" Jul 6 23:59:56.601023 kubelet[3195]: I0706 23:59:56.600739 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9d8f8675-847a-473f-9615-86a34ddf702c-whisker-backend-key-pair\") pod \"whisker-69d69f759-8mmlz\" (UID: \"9d8f8675-847a-473f-9615-86a34ddf702c\") " pod="calico-system/whisker-69d69f759-8mmlz" Jul 6 23:59:56.836847 containerd[1975]: time="2025-07-06T23:59:56.836806148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69d69f759-8mmlz,Uid:9d8f8675-847a-473f-9615-86a34ddf702c,Namespace:calico-system,Attempt:0,}" Jul 6 23:59:56.888332 kubelet[3195]: I0706 23:59:56.887631 3195 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f2d9ccd-ed3f-4f7d-92e5-c508b756bfdb" path="/var/lib/kubelet/pods/2f2d9ccd-ed3f-4f7d-92e5-c508b756bfdb/volumes" Jul 6 23:59:57.004545 (udev-worker)[4641]: Network interface NamePolicy= disabled on kernel command line. 
Jul 6 23:59:57.017453 systemd-networkd[1816]: cali684d280c3e1: Link UP Jul 6 23:59:57.018254 systemd-networkd[1816]: cali684d280c3e1: Gained carrier Jul 6 23:59:57.037574 containerd[1975]: 2025-07-06 23:59:56.889 [INFO][4789] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:59:57.037574 containerd[1975]: 2025-07-06 23:59:56.900 [INFO][4789] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--95-k8s-whisker--69d69f759--8mmlz-eth0 whisker-69d69f759- calico-system 9d8f8675-847a-473f-9615-86a34ddf702c 954 0 2025-07-06 23:59:56 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:69d69f759 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-21-95 whisker-69d69f759-8mmlz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali684d280c3e1 [] [] }} ContainerID="d3718d69671b75d757dbe8795b638d7344c9a616b741ae8df0f05936d7de97c8" Namespace="calico-system" Pod="whisker-69d69f759-8mmlz" WorkloadEndpoint="ip--172--31--21--95-k8s-whisker--69d69f759--8mmlz-" Jul 6 23:59:57.037574 containerd[1975]: 2025-07-06 23:59:56.900 [INFO][4789] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d3718d69671b75d757dbe8795b638d7344c9a616b741ae8df0f05936d7de97c8" Namespace="calico-system" Pod="whisker-69d69f759-8mmlz" WorkloadEndpoint="ip--172--31--21--95-k8s-whisker--69d69f759--8mmlz-eth0" Jul 6 23:59:57.037574 containerd[1975]: 2025-07-06 23:59:56.931 [INFO][4801] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d3718d69671b75d757dbe8795b638d7344c9a616b741ae8df0f05936d7de97c8" HandleID="k8s-pod-network.d3718d69671b75d757dbe8795b638d7344c9a616b741ae8df0f05936d7de97c8" Workload="ip--172--31--21--95-k8s-whisker--69d69f759--8mmlz-eth0" Jul 6 23:59:57.037574 containerd[1975]: 2025-07-06 23:59:56.931 [INFO][4801] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d3718d69671b75d757dbe8795b638d7344c9a616b741ae8df0f05936d7de97c8" HandleID="k8s-pod-network.d3718d69671b75d757dbe8795b638d7344c9a616b741ae8df0f05936d7de97c8" Workload="ip--172--31--21--95-k8s-whisker--69d69f759--8mmlz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024eff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-21-95", "pod":"whisker-69d69f759-8mmlz", "timestamp":"2025-07-06 23:59:56.931025473 +0000 UTC"}, Hostname:"ip-172-31-21-95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:59:57.037574 containerd[1975]: 2025-07-06 23:59:56.931 [INFO][4801] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:59:57.037574 containerd[1975]: 2025-07-06 23:59:56.931 [INFO][4801] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:59:57.037574 containerd[1975]: 2025-07-06 23:59:56.931 [INFO][4801] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-95' Jul 6 23:59:57.037574 containerd[1975]: 2025-07-06 23:59:56.944 [INFO][4801] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d3718d69671b75d757dbe8795b638d7344c9a616b741ae8df0f05936d7de97c8" host="ip-172-31-21-95" Jul 6 23:59:57.037574 containerd[1975]: 2025-07-06 23:59:56.962 [INFO][4801] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-95" Jul 6 23:59:57.037574 containerd[1975]: 2025-07-06 23:59:56.969 [INFO][4801] ipam/ipam.go 511: Trying affinity for 192.168.15.128/26 host="ip-172-31-21-95" Jul 6 23:59:57.037574 containerd[1975]: 2025-07-06 23:59:56.972 [INFO][4801] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.128/26 host="ip-172-31-21-95" Jul 6 23:59:57.037574 containerd[1975]: 2025-07-06 23:59:56.975 [INFO][4801] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.128/26 host="ip-172-31-21-95" Jul 6 23:59:57.037574 containerd[1975]: 2025-07-06 23:59:56.975 [INFO][4801] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.128/26 handle="k8s-pod-network.d3718d69671b75d757dbe8795b638d7344c9a616b741ae8df0f05936d7de97c8" host="ip-172-31-21-95" Jul 6 23:59:57.037574 containerd[1975]: 2025-07-06 23:59:56.977 [INFO][4801] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d3718d69671b75d757dbe8795b638d7344c9a616b741ae8df0f05936d7de97c8 Jul 6 23:59:57.037574 containerd[1975]: 2025-07-06 23:59:56.985 [INFO][4801] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.128/26 handle="k8s-pod-network.d3718d69671b75d757dbe8795b638d7344c9a616b741ae8df0f05936d7de97c8" host="ip-172-31-21-95" Jul 6 23:59:57.037574 containerd[1975]: 2025-07-06 23:59:56.991 [INFO][4801] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.129/26] block=192.168.15.128/26 handle="k8s-pod-network.d3718d69671b75d757dbe8795b638d7344c9a616b741ae8df0f05936d7de97c8" host="ip-172-31-21-95" Jul 6 23:59:57.037574 containerd[1975]: 2025-07-06 23:59:56.992 [INFO][4801] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.129/26] handle="k8s-pod-network.d3718d69671b75d757dbe8795b638d7344c9a616b741ae8df0f05936d7de97c8" host="ip-172-31-21-95" Jul 6 23:59:57.037574 containerd[1975]: 2025-07-06 23:59:56.992 [INFO][4801] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:59:57.037574 containerd[1975]: 2025-07-06 23:59:56.992 [INFO][4801] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.129/26] IPv6=[] ContainerID="d3718d69671b75d757dbe8795b638d7344c9a616b741ae8df0f05936d7de97c8" HandleID="k8s-pod-network.d3718d69671b75d757dbe8795b638d7344c9a616b741ae8df0f05936d7de97c8" Workload="ip--172--31--21--95-k8s-whisker--69d69f759--8mmlz-eth0" Jul 6 23:59:57.038581 containerd[1975]: 2025-07-06 23:59:56.995 [INFO][4789] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d3718d69671b75d757dbe8795b638d7344c9a616b741ae8df0f05936d7de97c8" Namespace="calico-system" Pod="whisker-69d69f759-8mmlz" WorkloadEndpoint="ip--172--31--21--95-k8s-whisker--69d69f759--8mmlz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--95-k8s-whisker--69d69f759--8mmlz-eth0", GenerateName:"whisker-69d69f759-", Namespace:"calico-system", SelfLink:"", UID:"9d8f8675-847a-473f-9615-86a34ddf702c", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"69d69f759", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-95", ContainerID:"", Pod:"whisker-69d69f759-8mmlz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.15.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali684d280c3e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:59:57.038581 containerd[1975]: 2025-07-06 23:59:56.995 [INFO][4789] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.129/32] ContainerID="d3718d69671b75d757dbe8795b638d7344c9a616b741ae8df0f05936d7de97c8" Namespace="calico-system" Pod="whisker-69d69f759-8mmlz" WorkloadEndpoint="ip--172--31--21--95-k8s-whisker--69d69f759--8mmlz-eth0" Jul 6 23:59:57.038581 containerd[1975]: 2025-07-06 23:59:56.995 [INFO][4789] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali684d280c3e1 ContainerID="d3718d69671b75d757dbe8795b638d7344c9a616b741ae8df0f05936d7de97c8" Namespace="calico-system" Pod="whisker-69d69f759-8mmlz" WorkloadEndpoint="ip--172--31--21--95-k8s-whisker--69d69f759--8mmlz-eth0" Jul 6 23:59:57.038581 containerd[1975]: 2025-07-06 23:59:57.014 [INFO][4789] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d3718d69671b75d757dbe8795b638d7344c9a616b741ae8df0f05936d7de97c8" Namespace="calico-system" Pod="whisker-69d69f759-8mmlz" WorkloadEndpoint="ip--172--31--21--95-k8s-whisker--69d69f759--8mmlz-eth0" Jul 6 23:59:57.038581 containerd[1975]: 2025-07-06 23:59:57.019 [INFO][4789] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d3718d69671b75d757dbe8795b638d7344c9a616b741ae8df0f05936d7de97c8" Namespace="calico-system" Pod="whisker-69d69f759-8mmlz"
WorkloadEndpoint="ip--172--31--21--95-k8s-whisker--69d69f759--8mmlz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--95-k8s-whisker--69d69f759--8mmlz-eth0", GenerateName:"whisker-69d69f759-", Namespace:"calico-system", SelfLink:"", UID:"9d8f8675-847a-473f-9615-86a34ddf702c", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"69d69f759", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-95", ContainerID:"d3718d69671b75d757dbe8795b638d7344c9a616b741ae8df0f05936d7de97c8", Pod:"whisker-69d69f759-8mmlz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.15.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali684d280c3e1", MAC:"ea:00:a8:1b:28:17", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:59:57.038581 containerd[1975]: 2025-07-06 23:59:57.033 [INFO][4789] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d3718d69671b75d757dbe8795b638d7344c9a616b741ae8df0f05936d7de97c8" Namespace="calico-system" Pod="whisker-69d69f759-8mmlz" WorkloadEndpoint="ip--172--31--21--95-k8s-whisker--69d69f759--8mmlz-eth0" Jul 6 23:59:57.066815 containerd[1975]: time="2025-07-06T23:59:57.066544638Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:59:57.066815 containerd[1975]: time="2025-07-06T23:59:57.066597060Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:59:57.066815 containerd[1975]: time="2025-07-06T23:59:57.066608285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:57.069189 containerd[1975]: time="2025-07-06T23:59:57.068665659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:57.106109 systemd[1]: Started cri-containerd-d3718d69671b75d757dbe8795b638d7344c9a616b741ae8df0f05936d7de97c8.scope - libcontainer container d3718d69671b75d757dbe8795b638d7344c9a616b741ae8df0f05936d7de97c8. 
Jul 6 23:59:57.172784 containerd[1975]: time="2025-07-06T23:59:57.172372442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69d69f759-8mmlz,Uid:9d8f8675-847a-473f-9615-86a34ddf702c,Namespace:calico-system,Attempt:0,} returns sandbox id \"d3718d69671b75d757dbe8795b638d7344c9a616b741ae8df0f05936d7de97c8\"" Jul 6 23:59:57.179029 containerd[1975]: time="2025-07-06T23:59:57.178990887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 6 23:59:57.688361 kubelet[3195]: I0706 23:59:57.688049 3195 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:59:58.229047 systemd-networkd[1816]: cali684d280c3e1: Gained IPv6LL Jul 6 23:59:58.582006 systemd[1]: Started sshd@9-172.31.21.95:22-147.75.109.163:40880.service - OpenSSH per-connection server daemon (147.75.109.163:40880). Jul 6 23:59:58.623448 containerd[1975]: time="2025-07-06T23:59:58.623386385Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:58.624503 containerd[1975]: time="2025-07-06T23:59:58.624412950Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 6 23:59:58.627895 containerd[1975]: time="2025-07-06T23:59:58.626518844Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:58.631893 containerd[1975]: time="2025-07-06T23:59:58.629422387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:58.631893 containerd[1975]: time="2025-07-06T23:59:58.630120178Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.451086456s" Jul 6 23:59:58.631893 containerd[1975]: time="2025-07-06T23:59:58.630156388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 6 23:59:58.712103 containerd[1975]: time="2025-07-06T23:59:58.712043633Z" level=info msg="CreateContainer within sandbox \"d3718d69671b75d757dbe8795b638d7344c9a616b741ae8df0f05936d7de97c8\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 6 23:59:58.753608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1227961734.mount: Deactivated successfully. 
Jul 6 23:59:58.757503 containerd[1975]: time="2025-07-06T23:59:58.757455214Z" level=info msg="CreateContainer within sandbox \"d3718d69671b75d757dbe8795b638d7344c9a616b741ae8df0f05936d7de97c8\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"140bfe2513f376b1d5e1f965446cdcb2d7ce696ec4c5160da1384b8f73a188fb\"" Jul 6 23:59:58.761893 containerd[1975]: time="2025-07-06T23:59:58.758357612Z" level=info msg="StartContainer for \"140bfe2513f376b1d5e1f965446cdcb2d7ce696ec4c5160da1384b8f73a188fb\"" Jul 6 23:59:58.851953 systemd[1]: Started cri-containerd-140bfe2513f376b1d5e1f965446cdcb2d7ce696ec4c5160da1384b8f73a188fb.scope - libcontainer container 140bfe2513f376b1d5e1f965446cdcb2d7ce696ec4c5160da1384b8f73a188fb. Jul 6 23:59:58.880618 sshd[4889]: Accepted publickey for core from 147.75.109.163 port 40880 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 6 23:59:58.889008 containerd[1975]: time="2025-07-06T23:59:58.887315870Z" level=info msg="StopPodSandbox for \"c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5\"" Jul 6 23:59:58.888661 sshd[4889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:58.903937 systemd-logind[1952]: New session 10 of user core. Jul 6 23:59:58.909328 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 6 23:59:59.066110 containerd[1975]: time="2025-07-06T23:59:59.065851436Z" level=info msg="StartContainer for \"140bfe2513f376b1d5e1f965446cdcb2d7ce696ec4c5160da1384b8f73a188fb\" returns successfully" Jul 6 23:59:59.075167 containerd[1975]: time="2025-07-06T23:59:59.073661250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 6 23:59:59.155755 containerd[1975]: 2025-07-06 23:59:59.000 [INFO][4942] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5" Jul 6 23:59:59.155755 containerd[1975]: 2025-07-06 23:59:59.000 [INFO][4942] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5" iface="eth0" netns="/var/run/netns/cni-e61aa4a9-5ed1-2b69-73e4-8b4dabbc2f94" Jul 6 23:59:59.155755 containerd[1975]: 2025-07-06 23:59:59.000 [INFO][4942] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5" iface="eth0" netns="/var/run/netns/cni-e61aa4a9-5ed1-2b69-73e4-8b4dabbc2f94" Jul 6 23:59:59.155755 containerd[1975]: 2025-07-06 23:59:59.002 [INFO][4942] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5" iface="eth0" netns="/var/run/netns/cni-e61aa4a9-5ed1-2b69-73e4-8b4dabbc2f94" Jul 6 23:59:59.155755 containerd[1975]: 2025-07-06 23:59:59.003 [INFO][4942] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5" Jul 6 23:59:59.155755 containerd[1975]: 2025-07-06 23:59:59.003 [INFO][4942] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5" Jul 6 23:59:59.155755 containerd[1975]: 2025-07-06 23:59:59.132 [INFO][4954] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5" HandleID="k8s-pod-network.c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5" Workload="ip--172--31--21--95-k8s-calico--kube--controllers--6868664579--646k8-eth0" Jul 6 23:59:59.155755 containerd[1975]: 2025-07-06 23:59:59.133 [INFO][4954] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:59:59.155755 containerd[1975]: 2025-07-06 23:59:59.133 [INFO][4954] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:59:59.155755 containerd[1975]: 2025-07-06 23:59:59.141 [WARNING][4954] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5" HandleID="k8s-pod-network.c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5" Workload="ip--172--31--21--95-k8s-calico--kube--controllers--6868664579--646k8-eth0" Jul 6 23:59:59.155755 containerd[1975]: 2025-07-06 23:59:59.142 [INFO][4954] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5" HandleID="k8s-pod-network.c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5" Workload="ip--172--31--21--95-k8s-calico--kube--controllers--6868664579--646k8-eth0" Jul 6 23:59:59.155755 containerd[1975]: 2025-07-06 23:59:59.146 [INFO][4954] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:59:59.155755 containerd[1975]: 2025-07-06 23:59:59.150 [INFO][4942] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5" Jul 6 23:59:59.158891 containerd[1975]: time="2025-07-06T23:59:59.156958172Z" level=info msg="TearDown network for sandbox \"c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5\" successfully" Jul 6 23:59:59.158891 containerd[1975]: time="2025-07-06T23:59:59.156996032Z" level=info msg="StopPodSandbox for \"c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5\" returns successfully" Jul 6 23:59:59.158891 containerd[1975]: time="2025-07-06T23:59:59.158016780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6868664579-646k8,Uid:f5549cc7-b328-4c09-b9a8-a657f9c3b244,Namespace:calico-system,Attempt:1,}" Jul 6 23:59:59.163514 systemd[1]: run-netns-cni\x2de61aa4a9\x2d5ed1\x2d2b69\x2d73e4\x2d8b4dabbc2f94.mount: Deactivated successfully. 
Jul 6 23:59:59.479714 systemd-networkd[1816]: cali5aed4505c37: Link UP Jul 6 23:59:59.480059 systemd-networkd[1816]: cali5aed4505c37: Gained carrier Jul 6 23:59:59.521910 containerd[1975]: 2025-07-06 23:59:59.258 [INFO][4979] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:59:59.521910 containerd[1975]: 2025-07-06 23:59:59.277 [INFO][4979] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--95-k8s-calico--kube--controllers--6868664579--646k8-eth0 calico-kube-controllers-6868664579- calico-system f5549cc7-b328-4c09-b9a8-a657f9c3b244 1000 0 2025-07-06 23:59:35 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6868664579 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-21-95 calico-kube-controllers-6868664579-646k8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5aed4505c37 [] [] }} ContainerID="1e0b022d7a2b3fa27b69632dd96a0bc029a50f040a21f2bc9dadcbe4a8b4c8fe" Namespace="calico-system" Pod="calico-kube-controllers-6868664579-646k8" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--kube--controllers--6868664579--646k8-" Jul 6 23:59:59.521910 containerd[1975]: 2025-07-06 23:59:59.277 [INFO][4979] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1e0b022d7a2b3fa27b69632dd96a0bc029a50f040a21f2bc9dadcbe4a8b4c8fe" Namespace="calico-system" Pod="calico-kube-controllers-6868664579-646k8" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--kube--controllers--6868664579--646k8-eth0" Jul 6 23:59:59.521910 containerd[1975]: 2025-07-06 23:59:59.372 [INFO][4996] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1e0b022d7a2b3fa27b69632dd96a0bc029a50f040a21f2bc9dadcbe4a8b4c8fe" HandleID="k8s-pod-network.1e0b022d7a2b3fa27b69632dd96a0bc029a50f040a21f2bc9dadcbe4a8b4c8fe" Workload="ip--172--31--21--95-k8s-calico--kube--controllers--6868664579--646k8-eth0" Jul 6 23:59:59.521910 containerd[1975]: 2025-07-06 23:59:59.373 [INFO][4996] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1e0b022d7a2b3fa27b69632dd96a0bc029a50f040a21f2bc9dadcbe4a8b4c8fe" HandleID="k8s-pod-network.1e0b022d7a2b3fa27b69632dd96a0bc029a50f040a21f2bc9dadcbe4a8b4c8fe" Workload="ip--172--31--21--95-k8s-calico--kube--controllers--6868664579--646k8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd990), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-21-95", "pod":"calico-kube-controllers-6868664579-646k8", "timestamp":"2025-07-06 23:59:59.371441673 +0000 UTC"}, Hostname:"ip-172-31-21-95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:59:59.521910 containerd[1975]: 2025-07-06 23:59:59.373 [INFO][4996] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:59:59.521910 containerd[1975]: 2025-07-06 23:59:59.373 [INFO][4996] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:59:59.521910 containerd[1975]: 2025-07-06 23:59:59.373 [INFO][4996] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-95' Jul 6 23:59:59.521910 containerd[1975]: 2025-07-06 23:59:59.388 [INFO][4996] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1e0b022d7a2b3fa27b69632dd96a0bc029a50f040a21f2bc9dadcbe4a8b4c8fe" host="ip-172-31-21-95" Jul 6 23:59:59.521910 containerd[1975]: 2025-07-06 23:59:59.404 [INFO][4996] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-95" Jul 6 23:59:59.521910 containerd[1975]: 2025-07-06 23:59:59.415 [INFO][4996] ipam/ipam.go 511: Trying affinity for 192.168.15.128/26 host="ip-172-31-21-95" Jul 6 23:59:59.521910 containerd[1975]: 2025-07-06 23:59:59.420 [INFO][4996] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.128/26 host="ip-172-31-21-95" Jul 6 23:59:59.521910 containerd[1975]: 2025-07-06 23:59:59.429 [INFO][4996] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.128/26 host="ip-172-31-21-95" Jul 6 23:59:59.521910 containerd[1975]: 2025-07-06 23:59:59.430 [INFO][4996] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.128/26 handle="k8s-pod-network.1e0b022d7a2b3fa27b69632dd96a0bc029a50f040a21f2bc9dadcbe4a8b4c8fe" host="ip-172-31-21-95" Jul 6 23:59:59.521910 containerd[1975]: 2025-07-06 23:59:59.436 [INFO][4996] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1e0b022d7a2b3fa27b69632dd96a0bc029a50f040a21f2bc9dadcbe4a8b4c8fe Jul 6 23:59:59.521910 containerd[1975]: 2025-07-06 23:59:59.449 [INFO][4996] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.128/26 handle="k8s-pod-network.1e0b022d7a2b3fa27b69632dd96a0bc029a50f040a21f2bc9dadcbe4a8b4c8fe" host="ip-172-31-21-95" Jul 6 23:59:59.521910 containerd[1975]: 2025-07-06 23:59:59.462 [INFO][4996] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.130/26] block=192.168.15.128/26 handle="k8s-pod-network.1e0b022d7a2b3fa27b69632dd96a0bc029a50f040a21f2bc9dadcbe4a8b4c8fe" host="ip-172-31-21-95" Jul 6 23:59:59.521910 containerd[1975]: 2025-07-06 23:59:59.462 [INFO][4996] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.130/26] handle="k8s-pod-network.1e0b022d7a2b3fa27b69632dd96a0bc029a50f040a21f2bc9dadcbe4a8b4c8fe" host="ip-172-31-21-95" Jul 6 23:59:59.521910 containerd[1975]: 2025-07-06 23:59:59.462 [INFO][4996] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
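[Editor's note] The sequence above is Calico's block-affinity IPAM in miniature: the host ip-172-31-21-95 holds affinity for 192.168.15.128/26, the block is loaded, and the lowest free address is claimed under a fresh handle. Below is a toy in-memory model of just the "assign one address from an affine /26" step; Calico's real allocator persists blocks and handles in the datastore, and the comment about which earlier allocation consumed .128 is an assumption.

```go
// Toy model of per-block IP assignment from an affine /26.
package main

import (
	"fmt"
	"net/netip"
)

type block struct {
	cidr      netip.Prefix
	allocated [64]bool // a /26 holds 2^(32-26) = 64 addresses
}

// assign claims the lowest free address, mirroring the
// "Attempting to assign 1 addresses from block" step in the log.
func (b *block) assign() (netip.Addr, bool) {
	addr := b.cidr.Addr()
	for i := range b.allocated {
		if !b.allocated[i] {
			b.allocated[i] = true
			return addr, true
		}
		addr = addr.Next()
	}
	return netip.Addr{}, false // exhausted: Calico would claim another block
}

func main() {
	b := &block{cidr: netip.MustParsePrefix("192.168.15.128/26")}
	b.assign()          // .128, already taken on this node (assumption)
	b.assign()          // .129, the whisker pod earlier in the log
	ip, _ := b.assign() // .130, matching the claim above
	fmt.Println(ip)     // 192.168.15.130
}
```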
Jul 6 23:59:59.521910 containerd[1975]: 2025-07-06 23:59:59.462 [INFO][4996] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.130/26] IPv6=[] ContainerID="1e0b022d7a2b3fa27b69632dd96a0bc029a50f040a21f2bc9dadcbe4a8b4c8fe" HandleID="k8s-pod-network.1e0b022d7a2b3fa27b69632dd96a0bc029a50f040a21f2bc9dadcbe4a8b4c8fe" Workload="ip--172--31--21--95-k8s-calico--kube--controllers--6868664579--646k8-eth0" Jul 6 23:59:59.526733 containerd[1975]: 2025-07-06 23:59:59.467 [INFO][4979] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1e0b022d7a2b3fa27b69632dd96a0bc029a50f040a21f2bc9dadcbe4a8b4c8fe" Namespace="calico-system" Pod="calico-kube-controllers-6868664579-646k8" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--kube--controllers--6868664579--646k8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--95-k8s-calico--kube--controllers--6868664579--646k8-eth0", GenerateName:"calico-kube-controllers-6868664579-", Namespace:"calico-system", SelfLink:"", UID:"f5549cc7-b328-4c09-b9a8-a657f9c3b244", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6868664579", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-95", ContainerID:"", Pod:"calico-kube-controllers-6868664579-646k8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.15.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5aed4505c37", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:59:59.526733 containerd[1975]: 2025-07-06 23:59:59.468 [INFO][4979] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.130/32] ContainerID="1e0b022d7a2b3fa27b69632dd96a0bc029a50f040a21f2bc9dadcbe4a8b4c8fe" Namespace="calico-system" Pod="calico-kube-controllers-6868664579-646k8" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--kube--controllers--6868664579--646k8-eth0" Jul 6 23:59:59.526733 containerd[1975]: 2025-07-06 23:59:59.468 [INFO][4979] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5aed4505c37 ContainerID="1e0b022d7a2b3fa27b69632dd96a0bc029a50f040a21f2bc9dadcbe4a8b4c8fe" Namespace="calico-system" Pod="calico-kube-controllers-6868664579-646k8" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--kube--controllers--6868664579--646k8-eth0" Jul 6 23:59:59.526733 containerd[1975]: 2025-07-06 23:59:59.481 [INFO][4979] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1e0b022d7a2b3fa27b69632dd96a0bc029a50f040a21f2bc9dadcbe4a8b4c8fe" Namespace="calico-system" Pod="calico-kube-controllers-6868664579-646k8" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--kube--controllers--6868664579--646k8-eth0"
Jul 6 23:59:59.526733 containerd[1975]: 2025-07-06 23:59:59.485 [INFO][4979] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1e0b022d7a2b3fa27b69632dd96a0bc029a50f040a21f2bc9dadcbe4a8b4c8fe" Namespace="calico-system" Pod="calico-kube-controllers-6868664579-646k8" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--kube--controllers--6868664579--646k8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--95-k8s-calico--kube--controllers--6868664579--646k8-eth0", GenerateName:"calico-kube-controllers-6868664579-", Namespace:"calico-system", SelfLink:"", UID:"f5549cc7-b328-4c09-b9a8-a657f9c3b244", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6868664579", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-95", ContainerID:"1e0b022d7a2b3fa27b69632dd96a0bc029a50f040a21f2bc9dadcbe4a8b4c8fe", Pod:"calico-kube-controllers-6868664579-646k8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.15.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5aed4505c37", MAC:"86:08:8b:13:52:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:59:59.526733 containerd[1975]: 2025-07-06 23:59:59.515 [INFO][4979] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1e0b022d7a2b3fa27b69632dd96a0bc029a50f040a21f2bc9dadcbe4a8b4c8fe" Namespace="calico-system" Pod="calico-kube-controllers-6868664579-646k8" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--kube--controllers--6868664579--646k8-eth0" Jul 6 23:59:59.593377 containerd[1975]: time="2025-07-06T23:59:59.592851178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:59:59.593377 containerd[1975]: time="2025-07-06T23:59:59.592946226Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:59:59.593377 containerd[1975]: time="2025-07-06T23:59:59.592970719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:59.593377 containerd[1975]: time="2025-07-06T23:59:59.593129380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:59.641465 systemd[1]: Started cri-containerd-1e0b022d7a2b3fa27b69632dd96a0bc029a50f040a21f2bc9dadcbe4a8b4c8fe.scope - libcontainer container 1e0b022d7a2b3fa27b69632dd96a0bc029a50f040a21f2bc9dadcbe4a8b4c8fe.
Jul 6 23:59:59.749809 containerd[1975]: time="2025-07-06T23:59:59.749687707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6868664579-646k8,Uid:f5549cc7-b328-4c09-b9a8-a657f9c3b244,Namespace:calico-system,Attempt:1,} returns sandbox id \"1e0b022d7a2b3fa27b69632dd96a0bc029a50f040a21f2bc9dadcbe4a8b4c8fe\"" Jul 6 23:59:59.803898 kernel: bpftool[5085]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 6 23:59:59.883688 containerd[1975]: time="2025-07-06T23:59:59.883631123Z" level=info msg="StopPodSandbox for \"a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328\"" Jul 6 23:59:59.952921 sshd[4889]: pam_unix(sshd:session): session closed for user core Jul 6 23:59:59.961282 systemd[1]: sshd@9-172.31.21.95:22-147.75.109.163:40880.service: Deactivated successfully. Jul 6 23:59:59.963502 systemd[1]: session-10.scope: Deactivated successfully. Jul 6 23:59:59.965882 systemd-logind[1952]: Session 10 logged out. Waiting for processes to exit. Jul 6 23:59:59.970497 systemd-logind[1952]: Removed session 10. Jul 7 00:00:00.031798 containerd[1975]: 2025-07-06 23:59:59.972 [INFO][5105] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328" Jul 7 00:00:00.031798 containerd[1975]: 2025-07-06 23:59:59.973 [INFO][5105] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328" iface="eth0" netns="/var/run/netns/cni-c28e94e1-4365-8351-7d37-23be789b3113" Jul 7 00:00:00.031798 containerd[1975]: 2025-07-06 23:59:59.975 [INFO][5105] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328" iface="eth0" netns="/var/run/netns/cni-c28e94e1-4365-8351-7d37-23be789b3113" Jul 7 00:00:00.031798 containerd[1975]: 2025-07-06 23:59:59.975 [INFO][5105] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328" iface="eth0" netns="/var/run/netns/cni-c28e94e1-4365-8351-7d37-23be789b3113" Jul 7 00:00:00.031798 containerd[1975]: 2025-07-06 23:59:59.975 [INFO][5105] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328" Jul 7 00:00:00.031798 containerd[1975]: 2025-07-06 23:59:59.976 [INFO][5105] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328" Jul 7 00:00:00.031798 containerd[1975]: 2025-07-07 00:00:00.011 [INFO][5115] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328" HandleID="k8s-pod-network.a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328" Workload="ip--172--31--21--95-k8s-goldmane--768f4c5c69--5rnc5-eth0" Jul 7 00:00:00.031798 containerd[1975]: 2025-07-07 00:00:00.011 [INFO][5115] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:00.031798 containerd[1975]: 2025-07-07 00:00:00.011 [INFO][5115] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:00:00.031798 containerd[1975]: 2025-07-07 00:00:00.022 [WARNING][5115] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328" HandleID="k8s-pod-network.a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328" Workload="ip--172--31--21--95-k8s-goldmane--768f4c5c69--5rnc5-eth0" Jul 7 00:00:00.031798 containerd[1975]: 2025-07-07 00:00:00.022 [INFO][5115] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328" HandleID="k8s-pod-network.a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328" Workload="ip--172--31--21--95-k8s-goldmane--768f4c5c69--5rnc5-eth0" Jul 7 00:00:00.031798 containerd[1975]: 2025-07-07 00:00:00.025 [INFO][5115] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:00.031798 containerd[1975]: 2025-07-07 00:00:00.028 [INFO][5105] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328" Jul 7 00:00:00.038685 containerd[1975]: time="2025-07-07T00:00:00.031945369Z" level=info msg="TearDown network for sandbox \"a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328\" successfully" Jul 7 00:00:00.038685 containerd[1975]: time="2025-07-07T00:00:00.031978098Z" level=info msg="StopPodSandbox for \"a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328\" returns successfully" Jul 7 00:00:00.039466 containerd[1975]: time="2025-07-07T00:00:00.039415111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-5rnc5,Uid:19ccd617-9252-4320-ae1c-b3a2be4963b2,Namespace:calico-system,Attempt:1,}" Jul 7 00:00:00.043694 systemd[1]: run-netns-cni\x2dc28e94e1\x2d4365\x2d8351\x2d7d37\x2d23be789b3113.mount: Deactivated successfully. Jul 7 00:00:00.057562 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. Jul 7 00:00:00.070524 systemd[1]: Starting mdadm.service - Initiates a check run of an MD array's redundancy information.... Jul 7 00:00:00.138454 systemd[1]: logrotate.service: Deactivated successfully. Jul 7 00:00:00.173050 systemd[1]: mdadm.service: Deactivated successfully. Jul 7 00:00:00.173520 systemd[1]: Finished mdadm.service - Initiates a check run of an MD array's redundancy information.. Jul 7 00:00:00.499990 (udev-worker)[4650]: Network interface NamePolicy= disabled on kernel command line. 
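[Editor's note] Every IPAM operation in this log brackets its datastore reads and writes with "About to acquire host-wide IPAM lock" and "Released host-wide IPAM lock", serializing concurrent CNI invocations on the node. Below is a sketch of that serialization pattern using a plain flock-based file lock; the lock file path is illustrative and Calico's actual lock is internal to the plugin, not this file.

```go
// Sketch: one exclusive per-host lock around every block read-modify-write.
package main

import (
	"log"
	"os"

	"golang.org/x/sys/unix"
)

func withHostLock(path string, fn func() error) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return err
	}
	defer f.Close()

	// Blocks until any concurrent CNI ADD/DEL on this host releases it.
	if err := unix.Flock(int(f.Fd()), unix.LOCK_EX); err != nil {
		return err
	}
	defer unix.Flock(int(f.Fd()), unix.LOCK_UN)

	return fn()
}

func main() {
	err := withHostLock("/tmp/ipam-demo.lock", func() error {
		log.Println("holding host-wide lock; safe to read and write IPAM blocks")
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```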
Jul 7 00:00:00.521897 systemd-networkd[1816]: calibf215de4fd8: Link UP Jul 7 00:00:00.522272 systemd-networkd[1816]: calibf215de4fd8: Gained carrier Jul 7 00:00:00.536395 systemd-networkd[1816]: vxlan.calico: Link UP Jul 7 00:00:00.536410 systemd-networkd[1816]: vxlan.calico: Gained carrier Jul 7 00:00:00.585741 containerd[1975]: 2025-07-07 00:00:00.266 [INFO][5138] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--95-k8s-goldmane--768f4c5c69--5rnc5-eth0 goldmane-768f4c5c69- calico-system 19ccd617-9252-4320-ae1c-b3a2be4963b2 1012 0 2025-07-06 23:59:34 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-21-95 goldmane-768f4c5c69-5rnc5 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calibf215de4fd8 [] [] }} ContainerID="1f48305019e25ebf4c29c05d049e9135ff931c8f2247ae7322f3ce0959c4ff25" Namespace="calico-system" Pod="goldmane-768f4c5c69-5rnc5" WorkloadEndpoint="ip--172--31--21--95-k8s-goldmane--768f4c5c69--5rnc5-" Jul 7 00:00:00.585741 containerd[1975]: 2025-07-07 00:00:00.266 [INFO][5138] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1f48305019e25ebf4c29c05d049e9135ff931c8f2247ae7322f3ce0959c4ff25" Namespace="calico-system" Pod="goldmane-768f4c5c69-5rnc5" WorkloadEndpoint="ip--172--31--21--95-k8s-goldmane--768f4c5c69--5rnc5-eth0" Jul 7 00:00:00.585741 containerd[1975]: 2025-07-07 00:00:00.321 [INFO][5154] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1f48305019e25ebf4c29c05d049e9135ff931c8f2247ae7322f3ce0959c4ff25" HandleID="k8s-pod-network.1f48305019e25ebf4c29c05d049e9135ff931c8f2247ae7322f3ce0959c4ff25" Workload="ip--172--31--21--95-k8s-goldmane--768f4c5c69--5rnc5-eth0" Jul 7 00:00:00.585741 containerd[1975]: 2025-07-07 00:00:00.322 [INFO][5154] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1f48305019e25ebf4c29c05d049e9135ff931c8f2247ae7322f3ce0959c4ff25" HandleID="k8s-pod-network.1f48305019e25ebf4c29c05d049e9135ff931c8f2247ae7322f3ce0959c4ff25" Workload="ip--172--31--21--95-k8s-goldmane--768f4c5c69--5rnc5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5690), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-21-95", "pod":"goldmane-768f4c5c69-5rnc5", "timestamp":"2025-07-07 00:00:00.321757439 +0000 UTC"}, Hostname:"ip-172-31-21-95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:00:00.585741 containerd[1975]: 2025-07-07 00:00:00.322 [INFO][5154] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:00.585741 containerd[1975]: 2025-07-07 00:00:00.322 [INFO][5154] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
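[Editor's note] Above, systemd-networkd reports vxlan.calico coming up and gaining carrier: Calico's felix agent creates this VXLAN device to carry pod traffic between nodes. A sketch of creating such a device with vishvananda/netlink follows, assuming Calico's default VNI (4096) and the standard VXLAN UDP port (4789); felix additionally binds the device to the node address and programs routes and ARP entries, which is omitted here.

```go
// Sketch: create and bring up a vxlan.calico-style overlay device.
package main

import (
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	la := netlink.NewLinkAttrs()
	la.Name = "vxlan.calico"

	vx := &netlink.Vxlan{
		LinkAttrs: la,
		VxlanId:   4096, // Calico's default VNI
		Port:      4789, // IANA-assigned VXLAN port
	}
	if err := netlink.LinkAdd(vx); err != nil {
		log.Fatal(err)
	}
	// Triggers the "Link UP" / "Gained carrier" events systemd-networkd logs.
	if err := netlink.LinkSetUp(vx); err != nil {
		log.Fatal(err)
	}
}
```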
Jul 7 00:00:00.585741 containerd[1975]: 2025-07-07 00:00:00.322 [INFO][5154] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-95' Jul 7 00:00:00.585741 containerd[1975]: 2025-07-07 00:00:00.341 [INFO][5154] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1f48305019e25ebf4c29c05d049e9135ff931c8f2247ae7322f3ce0959c4ff25" host="ip-172-31-21-95" Jul 7 00:00:00.585741 containerd[1975]: 2025-07-07 00:00:00.364 [INFO][5154] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-95" Jul 7 00:00:00.585741 containerd[1975]: 2025-07-07 00:00:00.379 [INFO][5154] ipam/ipam.go 511: Trying affinity for 192.168.15.128/26 host="ip-172-31-21-95" Jul 7 00:00:00.585741 containerd[1975]: 2025-07-07 00:00:00.386 [INFO][5154] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.128/26 host="ip-172-31-21-95" Jul 7 00:00:00.585741 containerd[1975]: 2025-07-07 00:00:00.390 [INFO][5154] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.128/26 host="ip-172-31-21-95" Jul 7 00:00:00.585741 containerd[1975]: 2025-07-07 00:00:00.390 [INFO][5154] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.128/26 handle="k8s-pod-network.1f48305019e25ebf4c29c05d049e9135ff931c8f2247ae7322f3ce0959c4ff25" host="ip-172-31-21-95" Jul 7 00:00:00.585741 containerd[1975]: 2025-07-07 00:00:00.393 [INFO][5154] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1f48305019e25ebf4c29c05d049e9135ff931c8f2247ae7322f3ce0959c4ff25 Jul 7 00:00:00.585741 containerd[1975]: 2025-07-07 00:00:00.402 [INFO][5154] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.128/26 handle="k8s-pod-network.1f48305019e25ebf4c29c05d049e9135ff931c8f2247ae7322f3ce0959c4ff25" host="ip-172-31-21-95" Jul 7 00:00:00.585741 containerd[1975]: 2025-07-07 00:00:00.422 [INFO][5154] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.131/26] block=192.168.15.128/26 handle="k8s-pod-network.1f48305019e25ebf4c29c05d049e9135ff931c8f2247ae7322f3ce0959c4ff25" host="ip-172-31-21-95" Jul 7 00:00:00.585741 containerd[1975]: 2025-07-07 00:00:00.422 [INFO][5154] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.131/26] handle="k8s-pod-network.1f48305019e25ebf4c29c05d049e9135ff931c8f2247ae7322f3ce0959c4ff25" host="ip-172-31-21-95" Jul 7 00:00:00.585741 containerd[1975]: 2025-07-07 00:00:00.422 [INFO][5154] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 00:00:00.585741 containerd[1975]: 2025-07-07 00:00:00.423 [INFO][5154] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.131/26] IPv6=[] ContainerID="1f48305019e25ebf4c29c05d049e9135ff931c8f2247ae7322f3ce0959c4ff25" HandleID="k8s-pod-network.1f48305019e25ebf4c29c05d049e9135ff931c8f2247ae7322f3ce0959c4ff25" Workload="ip--172--31--21--95-k8s-goldmane--768f4c5c69--5rnc5-eth0" Jul 7 00:00:00.587240 containerd[1975]: 2025-07-07 00:00:00.471 [INFO][5138] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1f48305019e25ebf4c29c05d049e9135ff931c8f2247ae7322f3ce0959c4ff25" Namespace="calico-system" Pod="goldmane-768f4c5c69-5rnc5" WorkloadEndpoint="ip--172--31--21--95-k8s-goldmane--768f4c5c69--5rnc5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--95-k8s-goldmane--768f4c5c69--5rnc5-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"19ccd617-9252-4320-ae1c-b3a2be4963b2", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 34, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-95", ContainerID:"", Pod:"goldmane-768f4c5c69-5rnc5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.15.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibf215de4fd8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:00.587240 containerd[1975]: 2025-07-07 00:00:00.471 [INFO][5138] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.131/32] ContainerID="1f48305019e25ebf4c29c05d049e9135ff931c8f2247ae7322f3ce0959c4ff25" Namespace="calico-system" Pod="goldmane-768f4c5c69-5rnc5" WorkloadEndpoint="ip--172--31--21--95-k8s-goldmane--768f4c5c69--5rnc5-eth0" Jul 7 00:00:00.587240 containerd[1975]: 2025-07-07 00:00:00.471 [INFO][5138] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibf215de4fd8 ContainerID="1f48305019e25ebf4c29c05d049e9135ff931c8f2247ae7322f3ce0959c4ff25" Namespace="calico-system" Pod="goldmane-768f4c5c69-5rnc5" WorkloadEndpoint="ip--172--31--21--95-k8s-goldmane--768f4c5c69--5rnc5-eth0" Jul 7 00:00:00.587240 containerd[1975]: 2025-07-07 00:00:00.522 [INFO][5138] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1f48305019e25ebf4c29c05d049e9135ff931c8f2247ae7322f3ce0959c4ff25" Namespace="calico-system" Pod="goldmane-768f4c5c69-5rnc5" WorkloadEndpoint="ip--172--31--21--95-k8s-goldmane--768f4c5c69--5rnc5-eth0" Jul 7 00:00:00.587240 containerd[1975]: 2025-07-07 00:00:00.526 [INFO][5138] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1f48305019e25ebf4c29c05d049e9135ff931c8f2247ae7322f3ce0959c4ff25" Namespace="calico-system" Pod="goldmane-768f4c5c69-5rnc5"
WorkloadEndpoint="ip--172--31--21--95-k8s-goldmane--768f4c5c69--5rnc5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--95-k8s-goldmane--768f4c5c69--5rnc5-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"19ccd617-9252-4320-ae1c-b3a2be4963b2", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 34, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-95", ContainerID:"1f48305019e25ebf4c29c05d049e9135ff931c8f2247ae7322f3ce0959c4ff25", Pod:"goldmane-768f4c5c69-5rnc5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.15.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibf215de4fd8", MAC:"7e:9c:0e:ad:dc:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:00.587240 containerd[1975]: 2025-07-07 00:00:00.578 [INFO][5138] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1f48305019e25ebf4c29c05d049e9135ff931c8f2247ae7322f3ce0959c4ff25" Namespace="calico-system" Pod="goldmane-768f4c5c69-5rnc5" WorkloadEndpoint="ip--172--31--21--95-k8s-goldmane--768f4c5c69--5rnc5-eth0" Jul 7 00:00:00.679507 containerd[1975]: time="2025-07-07T00:00:00.674917502Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:00:00.679507 containerd[1975]: time="2025-07-07T00:00:00.679129145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:00:00.679507 containerd[1975]: time="2025-07-07T00:00:00.679297224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:00.682166 containerd[1975]: time="2025-07-07T00:00:00.680686662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:00.777535 systemd[1]: Started cri-containerd-1f48305019e25ebf4c29c05d049e9135ff931c8f2247ae7322f3ce0959c4ff25.scope - libcontainer container 1f48305019e25ebf4c29c05d049e9135ff931c8f2247ae7322f3ce0959c4ff25.
Jul 7 00:00:00.898804 containerd[1975]: time="2025-07-07T00:00:00.898752078Z" level=info msg="StopPodSandbox for \"df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1\"" Jul 7 00:00:00.917505 containerd[1975]: time="2025-07-07T00:00:00.916668406Z" level=info msg="StopPodSandbox for \"adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116\"" Jul 7 00:00:00.921801 containerd[1975]: time="2025-07-07T00:00:00.921332561Z" level=info msg="StopPodSandbox for \"3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613\"" Jul 7 00:00:00.981215 systemd-networkd[1816]: cali5aed4505c37: Gained IPv6LL Jul 7 00:00:01.503118 containerd[1975]: time="2025-07-07T00:00:01.501228350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-5rnc5,Uid:19ccd617-9252-4320-ae1c-b3a2be4963b2,Namespace:calico-system,Attempt:1,} returns sandbox id \"1f48305019e25ebf4c29c05d049e9135ff931c8f2247ae7322f3ce0959c4ff25\"" Jul 7 00:00:01.576954 containerd[1975]: 2025-07-07 00:00:01.309 [INFO][5251] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116" Jul 7 00:00:01.576954 containerd[1975]: 2025-07-07 00:00:01.310 [INFO][5251] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116" iface="eth0" netns="/var/run/netns/cni-a7c9c1f5-2b78-61d7-5e6b-efba0a6067f6" Jul 7 00:00:01.576954 containerd[1975]: 2025-07-07 00:00:01.319 [INFO][5251] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116" iface="eth0" netns="/var/run/netns/cni-a7c9c1f5-2b78-61d7-5e6b-efba0a6067f6" Jul 7 00:00:01.576954 containerd[1975]: 2025-07-07 00:00:01.319 [INFO][5251] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116" iface="eth0" netns="/var/run/netns/cni-a7c9c1f5-2b78-61d7-5e6b-efba0a6067f6" Jul 7 00:00:01.576954 containerd[1975]: 2025-07-07 00:00:01.320 [INFO][5251] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116" Jul 7 00:00:01.576954 containerd[1975]: 2025-07-07 00:00:01.320 [INFO][5251] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116" Jul 7 00:00:01.576954 containerd[1975]: 2025-07-07 00:00:01.512 [INFO][5285] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116" HandleID="k8s-pod-network.adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116" Workload="ip--172--31--21--95-k8s-calico--apiserver--8484c8784c--78zl4-eth0" Jul 7 00:00:01.576954 containerd[1975]: 2025-07-07 00:00:01.513 [INFO][5285] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:01.576954 containerd[1975]: 2025-07-07 00:00:01.514 [INFO][5285] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:00:01.576954 containerd[1975]: 2025-07-07 00:00:01.533 [WARNING][5285] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116" HandleID="k8s-pod-network.adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116" Workload="ip--172--31--21--95-k8s-calico--apiserver--8484c8784c--78zl4-eth0" Jul 7 00:00:01.576954 containerd[1975]: 2025-07-07 00:00:01.533 [INFO][5285] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116" HandleID="k8s-pod-network.adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116" Workload="ip--172--31--21--95-k8s-calico--apiserver--8484c8784c--78zl4-eth0" Jul 7 00:00:01.576954 containerd[1975]: 2025-07-07 00:00:01.539 [INFO][5285] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:01.576954 containerd[1975]: 2025-07-07 00:00:01.545 [INFO][5251] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116" Jul 7 00:00:01.577622 containerd[1975]: time="2025-07-07T00:00:01.576999826Z" level=info msg="TearDown network for sandbox \"adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116\" successfully" Jul 7 00:00:01.577622 containerd[1975]: time="2025-07-07T00:00:01.577035605Z" level=info msg="StopPodSandbox for \"adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116\" returns successfully" Jul 7 00:00:01.586698 systemd[1]: run-netns-cni\x2da7c9c1f5\x2d2b78\x2d61d7\x2d5e6b\x2defba0a6067f6.mount: Deactivated successfully. Jul 7 00:00:01.591081 containerd[1975]: time="2025-07-07T00:00:01.590767660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8484c8784c-78zl4,Uid:e6136ec6-ffc6-441a-9474-e2f8829c266e,Namespace:calico-apiserver,Attempt:1,}" Jul 7 00:00:01.947548 systemd-networkd[1816]: calibf215de4fd8: Gained IPv6LL Jul 7 00:00:01.993030 containerd[1975]: time="2025-07-07T00:00:01.980223631Z" level=info msg="StopPodSandbox for \"1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3\"" Jul 7 00:00:02.319436 containerd[1975]: 2025-07-07 00:00:01.252 [INFO][5255] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613" Jul 7 00:00:02.319436 containerd[1975]: 2025-07-07 00:00:01.253 [INFO][5255] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613" iface="eth0" netns="/var/run/netns/cni-5d941a74-a118-2b1b-8989-f19fd40821ff" Jul 7 00:00:02.319436 containerd[1975]: 2025-07-07 00:00:01.254 [INFO][5255] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613" iface="eth0" netns="/var/run/netns/cni-5d941a74-a118-2b1b-8989-f19fd40821ff" Jul 7 00:00:02.319436 containerd[1975]: 2025-07-07 00:00:01.257 [INFO][5255] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613" iface="eth0" netns="/var/run/netns/cni-5d941a74-a118-2b1b-8989-f19fd40821ff" Jul 7 00:00:02.319436 containerd[1975]: 2025-07-07 00:00:01.257 [INFO][5255] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613" Jul 7 00:00:02.319436 containerd[1975]: 2025-07-07 00:00:01.257 [INFO][5255] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613" Jul 7 00:00:02.319436 containerd[1975]: 2025-07-07 00:00:01.572 [INFO][5273] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613" HandleID="k8s-pod-network.3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613" Workload="ip--172--31--21--95-k8s-coredns--674b8bbfcf--2s8cg-eth0" Jul 7 00:00:02.319436 containerd[1975]: 2025-07-07 00:00:01.573 [INFO][5273] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:02.319436 containerd[1975]: 2025-07-07 00:00:01.574 [INFO][5273] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:00:02.319436 containerd[1975]: 2025-07-07 00:00:01.780 [WARNING][5273] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613" HandleID="k8s-pod-network.3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613" Workload="ip--172--31--21--95-k8s-coredns--674b8bbfcf--2s8cg-eth0" Jul 7 00:00:02.319436 containerd[1975]: 2025-07-07 00:00:01.780 [INFO][5273] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613" HandleID="k8s-pod-network.3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613" Workload="ip--172--31--21--95-k8s-coredns--674b8bbfcf--2s8cg-eth0" Jul 7 00:00:02.319436 containerd[1975]: 2025-07-07 00:00:01.882 [INFO][5273] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:02.319436 containerd[1975]: 2025-07-07 00:00:02.085 [INFO][5255] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613" Jul 7 00:00:02.352781 containerd[1975]: time="2025-07-07T00:00:02.342966936Z" level=info msg="TearDown network for sandbox \"3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613\" successfully" Jul 7 00:00:02.352781 containerd[1975]: time="2025-07-07T00:00:02.343097844Z" level=info msg="StopPodSandbox for \"3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613\" returns successfully" Jul 7 00:00:02.388911 containerd[1975]: time="2025-07-07T00:00:02.377033845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2s8cg,Uid:93b5c195-f2cb-4978-9046-bbb50dfd5a25,Namespace:kube-system,Attempt:1,}" Jul 7 00:00:02.392117 systemd-networkd[1816]: vxlan.calico: Gained IPv6LL Jul 7 00:00:02.400782 systemd[1]: run-netns-cni\x2d5d941a74\x2da118\x2d2b1b\x2d8989\x2df19fd40821ff.mount: Deactivated successfully. Jul 7 00:00:02.571387 containerd[1975]: 2025-07-07 00:00:01.256 [INFO][5247] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1" Jul 7 00:00:02.571387 containerd[1975]: 2025-07-07 00:00:01.262 [INFO][5247] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1" iface="eth0" netns="/var/run/netns/cni-21e8ab41-09bc-fd07-96e2-7cfd3e9ffdfa" Jul 7 00:00:02.571387 containerd[1975]: 2025-07-07 00:00:01.266 [INFO][5247] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1" iface="eth0" netns="/var/run/netns/cni-21e8ab41-09bc-fd07-96e2-7cfd3e9ffdfa" Jul 7 00:00:02.571387 containerd[1975]: 2025-07-07 00:00:01.266 [INFO][5247] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1" iface="eth0" netns="/var/run/netns/cni-21e8ab41-09bc-fd07-96e2-7cfd3e9ffdfa" Jul 7 00:00:02.571387 containerd[1975]: 2025-07-07 00:00:01.266 [INFO][5247] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1" Jul 7 00:00:02.571387 containerd[1975]: 2025-07-07 00:00:01.266 [INFO][5247] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1" Jul 7 00:00:02.571387 containerd[1975]: 2025-07-07 00:00:01.743 [INFO][5278] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1" HandleID="k8s-pod-network.df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--r8llj-eth0" Jul 7 00:00:02.571387 containerd[1975]: 2025-07-07 00:00:01.743 [INFO][5278] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:02.571387 containerd[1975]: 2025-07-07 00:00:01.882 [INFO][5278] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:00:02.571387 containerd[1975]: 2025-07-07 00:00:02.392 [WARNING][5278] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1" HandleID="k8s-pod-network.df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--r8llj-eth0" Jul 7 00:00:02.571387 containerd[1975]: 2025-07-07 00:00:02.412 [INFO][5278] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1" HandleID="k8s-pod-network.df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--r8llj-eth0" Jul 7 00:00:02.571387 containerd[1975]: 2025-07-07 00:00:02.478 [INFO][5278] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:02.571387 containerd[1975]: 2025-07-07 00:00:02.543 [INFO][5247] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1" Jul 7 00:00:02.601857 containerd[1975]: time="2025-07-07T00:00:02.571385897Z" level=info msg="TearDown network for sandbox \"df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1\" successfully" Jul 7 00:00:02.601857 containerd[1975]: time="2025-07-07T00:00:02.571431793Z" level=info msg="StopPodSandbox for \"df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1\" returns successfully" Jul 7 00:00:02.620673 systemd[1]: run-netns-cni\x2d21e8ab41\x2d09bc\x2dfd07\x2d96e2\x2d7cfd3e9ffdfa.mount: Deactivated successfully. 
Jul 7 00:00:02.631894 containerd[1975]: time="2025-07-07T00:00:02.621723576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78dd578d87-r8llj,Uid:09c6b849-d9f2-457c-9d21-c2403e3bc700,Namespace:calico-apiserver,Attempt:1,}" Jul 7 00:00:03.119838 containerd[1975]: time="2025-07-07T00:00:03.117471267Z" level=info msg="StopPodSandbox for \"186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5\"" Jul 7 00:00:03.901241 containerd[1975]: time="2025-07-07T00:00:03.900730040Z" level=info msg="StopPodSandbox for \"ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154\"" Jul 7 00:00:04.210599 kubelet[3195]: I0707 00:00:04.210562 3195 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:00:05.086237 systemd[1]: Started sshd@10-172.31.21.95:22-147.75.109.163:40888.service - OpenSSH per-connection server daemon (147.75.109.163:40888). Jul 7 00:00:05.281378 systemd[1]: run-containerd-runc-k8s.io-389de9738b6dd6c4cf1e58967c70986343b93dc4db688d04e182e532315bc447-runc.J52aEA.mount: Deactivated successfully. Jul 7 00:00:05.602075 sshd[5445]: Accepted publickey for core from 147.75.109.163 port 40888 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:00:05.607526 sshd[5445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:00:05.623588 containerd[1975]: 2025-07-07 00:00:04.296 [INFO][5326] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3" Jul 7 00:00:05.623588 containerd[1975]: 2025-07-07 00:00:04.296 [INFO][5326] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3" iface="eth0" netns="/var/run/netns/cni-8ed65110-65b0-c92c-8d6f-e8d1fc3a486e" Jul 7 00:00:05.623588 containerd[1975]: 2025-07-07 00:00:04.298 [INFO][5326] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3" iface="eth0" netns="/var/run/netns/cni-8ed65110-65b0-c92c-8d6f-e8d1fc3a486e" Jul 7 00:00:05.623588 containerd[1975]: 2025-07-07 00:00:04.299 [INFO][5326] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3" iface="eth0" netns="/var/run/netns/cni-8ed65110-65b0-c92c-8d6f-e8d1fc3a486e" Jul 7 00:00:05.623588 containerd[1975]: 2025-07-07 00:00:04.299 [INFO][5326] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3" Jul 7 00:00:05.623588 containerd[1975]: 2025-07-07 00:00:04.299 [INFO][5326] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3" Jul 7 00:00:05.623588 containerd[1975]: 2025-07-07 00:00:05.288 [INFO][5401] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3" HandleID="k8s-pod-network.1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3" Workload="ip--172--31--21--95-k8s-csi--node--driver--lrnkv-eth0" Jul 7 00:00:05.623588 containerd[1975]: 2025-07-07 00:00:05.288 [INFO][5401] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:05.623588 containerd[1975]: 2025-07-07 00:00:05.289 [INFO][5401] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:00:05.623588 containerd[1975]: 2025-07-07 00:00:05.445 [WARNING][5401] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3" HandleID="k8s-pod-network.1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3" Workload="ip--172--31--21--95-k8s-csi--node--driver--lrnkv-eth0" Jul 7 00:00:05.623588 containerd[1975]: 2025-07-07 00:00:05.445 [INFO][5401] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3" HandleID="k8s-pod-network.1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3" Workload="ip--172--31--21--95-k8s-csi--node--driver--lrnkv-eth0" Jul 7 00:00:05.623588 containerd[1975]: 2025-07-07 00:00:05.483 [INFO][5401] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:05.623588 containerd[1975]: 2025-07-07 00:00:05.556 [INFO][5326] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3" Jul 7 00:00:05.654661 containerd[1975]: time="2025-07-07T00:00:05.651963737Z" level=info msg="TearDown network for sandbox \"1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3\" successfully" Jul 7 00:00:05.654661 containerd[1975]: time="2025-07-07T00:00:05.652020786Z" level=info msg="StopPodSandbox for \"1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3\" returns successfully" Jul 7 00:00:05.654861 containerd[1975]: time="2025-07-07T00:00:05.654772808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lrnkv,Uid:6b84e45b-9676-47c1-bdf6-d1f78bd2c24a,Namespace:calico-system,Attempt:1,}" Jul 7 00:00:05.672745 systemd-logind[1952]: New session 11 of user core. Jul 7 00:00:05.679141 systemd[1]: run-netns-cni\x2d8ed65110\x2d65b0\x2dc92c\x2d8d6f\x2de8d1fc3a486e.mount: Deactivated successfully. Jul 7 00:00:05.766035 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jul 7 00:00:06.397991 systemd-networkd[1816]: cali0dfade95f97: Link UP
Jul 7 00:00:06.398339 systemd-networkd[1816]: cali0dfade95f97: Gained carrier
Jul 7 00:00:06.541280 containerd[1975]: 2025-07-07 00:00:04.034 [INFO][5311] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--95-k8s-calico--apiserver--8484c8784c--78zl4-eth0 calico-apiserver-8484c8784c- calico-apiserver e6136ec6-ffc6-441a-9474-e2f8829c266e 1027 0 2025-07-06 23:59:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8484c8784c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-21-95 calico-apiserver-8484c8784c-78zl4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0dfade95f97 [] [] }} ContainerID="3c4763e818561c7ea4bb7c80ebab540caeffdc822297a897a8773359cdb2947a" Namespace="calico-apiserver" Pod="calico-apiserver-8484c8784c-78zl4" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--apiserver--8484c8784c--78zl4-"
Jul 7 00:00:06.541280 containerd[1975]: 2025-07-07 00:00:04.064 [INFO][5311] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3c4763e818561c7ea4bb7c80ebab540caeffdc822297a897a8773359cdb2947a" Namespace="calico-apiserver" Pod="calico-apiserver-8484c8784c-78zl4" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--apiserver--8484c8784c--78zl4-eth0"
Jul 7 00:00:06.541280 containerd[1975]: 2025-07-07 00:00:05.600 [INFO][5399] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c4763e818561c7ea4bb7c80ebab540caeffdc822297a897a8773359cdb2947a" HandleID="k8s-pod-network.3c4763e818561c7ea4bb7c80ebab540caeffdc822297a897a8773359cdb2947a" Workload="ip--172--31--21--95-k8s-calico--apiserver--8484c8784c--78zl4-eth0"
Jul 7 00:00:06.541280 containerd[1975]: 2025-07-07 00:00:05.600 [INFO][5399] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3c4763e818561c7ea4bb7c80ebab540caeffdc822297a897a8773359cdb2947a" HandleID="k8s-pod-network.3c4763e818561c7ea4bb7c80ebab540caeffdc822297a897a8773359cdb2947a" Workload="ip--172--31--21--95-k8s-calico--apiserver--8484c8784c--78zl4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002bbf70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-21-95", "pod":"calico-apiserver-8484c8784c-78zl4", "timestamp":"2025-07-07 00:00:05.575468309 +0000 UTC"}, Hostname:"ip-172-31-21-95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 7 00:00:06.541280 containerd[1975]: 2025-07-07 00:00:05.600 [INFO][5399] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 00:00:06.541280 containerd[1975]: 2025-07-07 00:00:05.600 [INFO][5399] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 00:00:06.541280 containerd[1975]: 2025-07-07 00:00:05.600 [INFO][5399] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-95'
Jul 7 00:00:06.541280 containerd[1975]: 2025-07-07 00:00:05.801 [INFO][5399] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3c4763e818561c7ea4bb7c80ebab540caeffdc822297a897a8773359cdb2947a" host="ip-172-31-21-95"
Jul 7 00:00:06.541280 containerd[1975]: 2025-07-07 00:00:05.868 [INFO][5399] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-95"
Jul 7 00:00:06.541280 containerd[1975]: 2025-07-07 00:00:05.992 [INFO][5399] ipam/ipam.go 511: Trying affinity for 192.168.15.128/26 host="ip-172-31-21-95"
Jul 7 00:00:06.541280 containerd[1975]: 2025-07-07 00:00:06.086 [INFO][5399] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.128/26 host="ip-172-31-21-95"
Jul 7 00:00:06.541280 containerd[1975]: 2025-07-07 00:00:06.156 [INFO][5399] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.128/26 host="ip-172-31-21-95"
Jul 7 00:00:06.541280 containerd[1975]: 2025-07-07 00:00:06.156 [INFO][5399] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.128/26 handle="k8s-pod-network.3c4763e818561c7ea4bb7c80ebab540caeffdc822297a897a8773359cdb2947a" host="ip-172-31-21-95"
Jul 7 00:00:06.541280 containerd[1975]: 2025-07-07 00:00:06.179 [INFO][5399] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3c4763e818561c7ea4bb7c80ebab540caeffdc822297a897a8773359cdb2947a
Jul 7 00:00:06.541280 containerd[1975]: 2025-07-07 00:00:06.208 [INFO][5399] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.128/26 handle="k8s-pod-network.3c4763e818561c7ea4bb7c80ebab540caeffdc822297a897a8773359cdb2947a" host="ip-172-31-21-95"
Jul 7 00:00:06.541280 containerd[1975]: 2025-07-07 00:00:06.303 [INFO][5399] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.132/26] block=192.168.15.128/26 handle="k8s-pod-network.3c4763e818561c7ea4bb7c80ebab540caeffdc822297a897a8773359cdb2947a" host="ip-172-31-21-95"
Jul 7 00:00:06.541280 containerd[1975]: 2025-07-07 00:00:06.304 [INFO][5399] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.132/26] handle="k8s-pod-network.3c4763e818561c7ea4bb7c80ebab540caeffdc822297a897a8773359cdb2947a" host="ip-172-31-21-95"
Jul 7 00:00:06.541280 containerd[1975]: 2025-07-07 00:00:06.320 [INFO][5399] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 00:00:06.541280 containerd[1975]: 2025-07-07 00:00:06.321 [INFO][5399] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.132/26] IPv6=[] ContainerID="3c4763e818561c7ea4bb7c80ebab540caeffdc822297a897a8773359cdb2947a" HandleID="k8s-pod-network.3c4763e818561c7ea4bb7c80ebab540caeffdc822297a897a8773359cdb2947a" Workload="ip--172--31--21--95-k8s-calico--apiserver--8484c8784c--78zl4-eth0"
Jul 7 00:00:06.544907 containerd[1975]: 2025-07-07 00:00:06.371 [INFO][5311] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3c4763e818561c7ea4bb7c80ebab540caeffdc822297a897a8773359cdb2947a" Namespace="calico-apiserver" Pod="calico-apiserver-8484c8784c-78zl4" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--apiserver--8484c8784c--78zl4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--95-k8s-calico--apiserver--8484c8784c--78zl4-eth0", GenerateName:"calico-apiserver-8484c8784c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e6136ec6-ffc6-441a-9474-e2f8829c266e", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8484c8784c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-95", ContainerID:"", Pod:"calico-apiserver-8484c8784c-78zl4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0dfade95f97", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 00:00:06.544907 containerd[1975]: 2025-07-07 00:00:06.375 [INFO][5311] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.132/32] ContainerID="3c4763e818561c7ea4bb7c80ebab540caeffdc822297a897a8773359cdb2947a" Namespace="calico-apiserver" Pod="calico-apiserver-8484c8784c-78zl4" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--apiserver--8484c8784c--78zl4-eth0"
Jul 7 00:00:06.544907 containerd[1975]: 2025-07-07 00:00:06.375 [INFO][5311] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0dfade95f97 ContainerID="3c4763e818561c7ea4bb7c80ebab540caeffdc822297a897a8773359cdb2947a" Namespace="calico-apiserver" Pod="calico-apiserver-8484c8784c-78zl4" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--apiserver--8484c8784c--78zl4-eth0"
Jul 7 00:00:06.544907 containerd[1975]: 2025-07-07 00:00:06.400 [INFO][5311] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c4763e818561c7ea4bb7c80ebab540caeffdc822297a897a8773359cdb2947a" Namespace="calico-apiserver" Pod="calico-apiserver-8484c8784c-78zl4" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--apiserver--8484c8784c--78zl4-eth0"
Jul 7 00:00:06.544907 containerd[1975]: 2025-07-07 00:00:06.401 [INFO][5311] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3c4763e818561c7ea4bb7c80ebab540caeffdc822297a897a8773359cdb2947a" Namespace="calico-apiserver" Pod="calico-apiserver-8484c8784c-78zl4" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--apiserver--8484c8784c--78zl4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--95-k8s-calico--apiserver--8484c8784c--78zl4-eth0", GenerateName:"calico-apiserver-8484c8784c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e6136ec6-ffc6-441a-9474-e2f8829c266e", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8484c8784c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-95", ContainerID:"3c4763e818561c7ea4bb7c80ebab540caeffdc822297a897a8773359cdb2947a", Pod:"calico-apiserver-8484c8784c-78zl4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0dfade95f97", MAC:"96:02:15:b7:f8:f8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 00:00:06.544907 containerd[1975]: 2025-07-07 00:00:06.512 [INFO][5311] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3c4763e818561c7ea4bb7c80ebab540caeffdc822297a897a8773359cdb2947a" Namespace="calico-apiserver" Pod="calico-apiserver-8484c8784c-78zl4" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--apiserver--8484c8784c--78zl4-eth0"
Jul 7 00:00:07.007346 systemd-networkd[1816]: cali478539d4343: Link UP
Jul 7 00:00:07.011355 systemd-networkd[1816]: cali478539d4343: Gained carrier
Jul 7 00:00:07.039999 containerd[1975]: time="2025-07-07T00:00:07.039778979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 00:00:07.039999 containerd[1975]: time="2025-07-07T00:00:07.039952864Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 00:00:07.041168 containerd[1975]: time="2025-07-07T00:00:07.040438406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 00:00:07.041881 containerd[1975]: time="2025-07-07T00:00:07.041344058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 00:00:07.119240 containerd[1975]: 2025-07-07 00:00:04.301 [INFO][5332] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--95-k8s-coredns--674b8bbfcf--2s8cg-eth0 coredns-674b8bbfcf- kube-system 93b5c195-f2cb-4978-9046-bbb50dfd5a25 1025 0 2025-07-06 23:59:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-21-95 coredns-674b8bbfcf-2s8cg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali478539d4343 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="04f2b9d4616b65f7aaeeae13916fb7235fa4deb93d720aafcbc3311e6fc513f1" Namespace="kube-system" Pod="coredns-674b8bbfcf-2s8cg" WorkloadEndpoint="ip--172--31--21--95-k8s-coredns--674b8bbfcf--2s8cg-"
Jul 7 00:00:07.119240 containerd[1975]: 2025-07-07 00:00:04.324 [INFO][5332] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="04f2b9d4616b65f7aaeeae13916fb7235fa4deb93d720aafcbc3311e6fc513f1" Namespace="kube-system" Pod="coredns-674b8bbfcf-2s8cg" WorkloadEndpoint="ip--172--31--21--95-k8s-coredns--674b8bbfcf--2s8cg-eth0"
Jul 7 00:00:07.119240 containerd[1975]: 2025-07-07 00:00:05.601 [INFO][5411] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="04f2b9d4616b65f7aaeeae13916fb7235fa4deb93d720aafcbc3311e6fc513f1" HandleID="k8s-pod-network.04f2b9d4616b65f7aaeeae13916fb7235fa4deb93d720aafcbc3311e6fc513f1" Workload="ip--172--31--21--95-k8s-coredns--674b8bbfcf--2s8cg-eth0"
Jul 7 00:00:07.119240 containerd[1975]: 2025-07-07 00:00:05.605 [INFO][5411] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="04f2b9d4616b65f7aaeeae13916fb7235fa4deb93d720aafcbc3311e6fc513f1" HandleID="k8s-pod-network.04f2b9d4616b65f7aaeeae13916fb7235fa4deb93d720aafcbc3311e6fc513f1" Workload="ip--172--31--21--95-k8s-coredns--674b8bbfcf--2s8cg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002accf0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-21-95", "pod":"coredns-674b8bbfcf-2s8cg", "timestamp":"2025-07-07 00:00:05.601474407 +0000 UTC"}, Hostname:"ip-172-31-21-95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 7 00:00:07.119240 containerd[1975]: 2025-07-07 00:00:05.605 [INFO][5411] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 00:00:07.119240 containerd[1975]: 2025-07-07 00:00:06.317 [INFO][5411] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 00:00:07.119240 containerd[1975]: 2025-07-07 00:00:06.318 [INFO][5411] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-95'
Jul 7 00:00:07.119240 containerd[1975]: 2025-07-07 00:00:06.422 [INFO][5411] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.04f2b9d4616b65f7aaeeae13916fb7235fa4deb93d720aafcbc3311e6fc513f1" host="ip-172-31-21-95"
Jul 7 00:00:07.119240 containerd[1975]: 2025-07-07 00:00:06.519 [INFO][5411] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-95"
Jul 7 00:00:07.119240 containerd[1975]: 2025-07-07 00:00:06.611 [INFO][5411] ipam/ipam.go 511: Trying affinity for 192.168.15.128/26 host="ip-172-31-21-95"
Jul 7 00:00:07.119240 containerd[1975]: 2025-07-07 00:00:06.638 [INFO][5411] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.128/26 host="ip-172-31-21-95"
Jul 7 00:00:07.119240 containerd[1975]: 2025-07-07 00:00:06.686 [INFO][5411] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.128/26 host="ip-172-31-21-95"
Jul 7 00:00:07.119240 containerd[1975]: 2025-07-07 00:00:06.703 [INFO][5411] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.128/26 handle="k8s-pod-network.04f2b9d4616b65f7aaeeae13916fb7235fa4deb93d720aafcbc3311e6fc513f1" host="ip-172-31-21-95"
Jul 7 00:00:07.119240 containerd[1975]: 2025-07-07 00:00:06.712 [INFO][5411] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.04f2b9d4616b65f7aaeeae13916fb7235fa4deb93d720aafcbc3311e6fc513f1
Jul 7 00:00:07.119240 containerd[1975]: 2025-07-07 00:00:06.777 [INFO][5411] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.128/26 handle="k8s-pod-network.04f2b9d4616b65f7aaeeae13916fb7235fa4deb93d720aafcbc3311e6fc513f1" host="ip-172-31-21-95"
Jul 7 00:00:07.119240 containerd[1975]: 2025-07-07 00:00:06.845 [INFO][5411] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.133/26] block=192.168.15.128/26 handle="k8s-pod-network.04f2b9d4616b65f7aaeeae13916fb7235fa4deb93d720aafcbc3311e6fc513f1" host="ip-172-31-21-95"
Jul 7 00:00:07.119240 containerd[1975]: 2025-07-07 00:00:06.848 [INFO][5411] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.133/26] handle="k8s-pod-network.04f2b9d4616b65f7aaeeae13916fb7235fa4deb93d720aafcbc3311e6fc513f1" host="ip-172-31-21-95"
Jul 7 00:00:07.119240 containerd[1975]: 2025-07-07 00:00:06.848 [INFO][5411] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 00:00:07.119240 containerd[1975]: 2025-07-07 00:00:06.849 [INFO][5411] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.133/26] IPv6=[] ContainerID="04f2b9d4616b65f7aaeeae13916fb7235fa4deb93d720aafcbc3311e6fc513f1" HandleID="k8s-pod-network.04f2b9d4616b65f7aaeeae13916fb7235fa4deb93d720aafcbc3311e6fc513f1" Workload="ip--172--31--21--95-k8s-coredns--674b8bbfcf--2s8cg-eth0"
Jul 7 00:00:07.122543 containerd[1975]: 2025-07-07 00:00:06.940 [INFO][5332] cni-plugin/k8s.go 418: Populated endpoint ContainerID="04f2b9d4616b65f7aaeeae13916fb7235fa4deb93d720aafcbc3311e6fc513f1" Namespace="kube-system" Pod="coredns-674b8bbfcf-2s8cg" WorkloadEndpoint="ip--172--31--21--95-k8s-coredns--674b8bbfcf--2s8cg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--95-k8s-coredns--674b8bbfcf--2s8cg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"93b5c195-f2cb-4978-9046-bbb50dfd5a25", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-95", ContainerID:"", Pod:"coredns-674b8bbfcf-2s8cg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali478539d4343", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 00:00:07.122543 containerd[1975]: 2025-07-07 00:00:06.942 [INFO][5332] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.133/32] ContainerID="04f2b9d4616b65f7aaeeae13916fb7235fa4deb93d720aafcbc3311e6fc513f1" Namespace="kube-system" Pod="coredns-674b8bbfcf-2s8cg" WorkloadEndpoint="ip--172--31--21--95-k8s-coredns--674b8bbfcf--2s8cg-eth0"
Jul 7 00:00:07.122543 containerd[1975]: 2025-07-07 00:00:06.942 [INFO][5332] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali478539d4343 ContainerID="04f2b9d4616b65f7aaeeae13916fb7235fa4deb93d720aafcbc3311e6fc513f1" Namespace="kube-system" Pod="coredns-674b8bbfcf-2s8cg" WorkloadEndpoint="ip--172--31--21--95-k8s-coredns--674b8bbfcf--2s8cg-eth0"
Jul 7 00:00:07.122543 containerd[1975]: 2025-07-07 00:00:07.010 [INFO][5332] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="04f2b9d4616b65f7aaeeae13916fb7235fa4deb93d720aafcbc3311e6fc513f1" Namespace="kube-system" Pod="coredns-674b8bbfcf-2s8cg" WorkloadEndpoint="ip--172--31--21--95-k8s-coredns--674b8bbfcf--2s8cg-eth0"
Jul 7 00:00:07.122543 containerd[1975]: 2025-07-07 00:00:07.016 [INFO][5332] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="04f2b9d4616b65f7aaeeae13916fb7235fa4deb93d720aafcbc3311e6fc513f1" Namespace="kube-system" Pod="coredns-674b8bbfcf-2s8cg" WorkloadEndpoint="ip--172--31--21--95-k8s-coredns--674b8bbfcf--2s8cg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--95-k8s-coredns--674b8bbfcf--2s8cg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"93b5c195-f2cb-4978-9046-bbb50dfd5a25", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-95", ContainerID:"04f2b9d4616b65f7aaeeae13916fb7235fa4deb93d720aafcbc3311e6fc513f1", Pod:"coredns-674b8bbfcf-2s8cg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali478539d4343", MAC:"26:3b:15:c4:58:43", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 00:00:07.122543 containerd[1975]: 2025-07-07 00:00:07.093 [INFO][5332] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="04f2b9d4616b65f7aaeeae13916fb7235fa4deb93d720aafcbc3311e6fc513f1" Namespace="kube-system" Pod="coredns-674b8bbfcf-2s8cg" WorkloadEndpoint="ip--172--31--21--95-k8s-coredns--674b8bbfcf--2s8cg-eth0"
Jul 7 00:00:07.124346 systemd[1]: Started cri-containerd-3c4763e818561c7ea4bb7c80ebab540caeffdc822297a897a8773359cdb2947a.scope - libcontainer container 3c4763e818561c7ea4bb7c80ebab540caeffdc822297a897a8773359cdb2947a.
Jul 7 00:00:07.291372 containerd[1975]: time="2025-07-07T00:00:07.290464907Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 00:00:07.307918 systemd-networkd[1816]: cali691761c7e36: Link UP
Jul 7 00:00:07.312285 systemd-networkd[1816]: cali691761c7e36: Gained carrier
Jul 7 00:00:07.328046 containerd[1975]: time="2025-07-07T00:00:07.293074009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 00:00:07.328046 containerd[1975]: time="2025-07-07T00:00:07.327255297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 00:00:07.328046 containerd[1975]: time="2025-07-07T00:00:07.327524512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 00:00:07.373489 containerd[1975]: 2025-07-07 00:00:04.916 [INFO][5366] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5"
Jul 7 00:00:07.373489 containerd[1975]: 2025-07-07 00:00:04.916 [INFO][5366] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5" iface="eth0" netns="/var/run/netns/cni-156990ee-32bf-da5a-1c75-b42217e1956d"
Jul 7 00:00:07.373489 containerd[1975]: 2025-07-07 00:00:05.024 [INFO][5366] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5" iface="eth0" netns="/var/run/netns/cni-156990ee-32bf-da5a-1c75-b42217e1956d"
Jul 7 00:00:07.373489 containerd[1975]: 2025-07-07 00:00:05.113 [INFO][5366] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5" iface="eth0" netns="/var/run/netns/cni-156990ee-32bf-da5a-1c75-b42217e1956d"
Jul 7 00:00:07.373489 containerd[1975]: 2025-07-07 00:00:05.113 [INFO][5366] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5"
Jul 7 00:00:07.373489 containerd[1975]: 2025-07-07 00:00:05.113 [INFO][5366] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5"
Jul 7 00:00:07.373489 containerd[1975]: 2025-07-07 00:00:06.135 [INFO][5454] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5" HandleID="k8s-pod-network.186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5" Workload="ip--172--31--21--95-k8s-coredns--674b8bbfcf--7m765-eth0"
Jul 7 00:00:07.373489 containerd[1975]: 2025-07-07 00:00:06.135 [INFO][5454] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 00:00:07.373489 containerd[1975]: 2025-07-07 00:00:07.249 [INFO][5454] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 00:00:07.373489 containerd[1975]: 2025-07-07 00:00:07.317 [WARNING][5454] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5" HandleID="k8s-pod-network.186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5" Workload="ip--172--31--21--95-k8s-coredns--674b8bbfcf--7m765-eth0"
Jul 7 00:00:07.373489 containerd[1975]: 2025-07-07 00:00:07.317 [INFO][5454] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5" HandleID="k8s-pod-network.186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5" Workload="ip--172--31--21--95-k8s-coredns--674b8bbfcf--7m765-eth0"
Jul 7 00:00:07.373489 containerd[1975]: 2025-07-07 00:00:07.349 [INFO][5454] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 00:00:07.373489 containerd[1975]: 2025-07-07 00:00:07.359 [INFO][5366] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5"
Jul 7 00:00:07.376156 containerd[1975]: time="2025-07-07T00:00:07.376024596Z" level=info msg="TearDown network for sandbox \"186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5\" successfully"
Jul 7 00:00:07.376156 containerd[1975]: time="2025-07-07T00:00:07.376071745Z" level=info msg="StopPodSandbox for \"186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5\" returns successfully"
Jul 7 00:00:07.382086 containerd[1975]: time="2025-07-07T00:00:07.381859900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7m765,Uid:a11ff9fd-e988-4620-8c05-f0bff4ac262f,Namespace:kube-system,Attempt:1,}"
Jul 7 00:00:07.383535 systemd[1]: run-netns-cni\x2d156990ee\x2d32bf\x2dda5a\x2d1c75\x2db42217e1956d.mount: Deactivated successfully.
Jul 7 00:00:07.422336 containerd[1975]: 2025-07-07 00:00:04.521 [INFO][5345] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--r8llj-eth0 calico-apiserver-78dd578d87- calico-apiserver 09c6b849-d9f2-457c-9d21-c2403e3bc700 1026 0 2025-07-06 23:59:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:78dd578d87 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-21-95 calico-apiserver-78dd578d87-r8llj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali691761c7e36 [] [] }} ContainerID="a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" Namespace="calico-apiserver" Pod="calico-apiserver-78dd578d87-r8llj" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--r8llj-"
Jul 7 00:00:07.422336 containerd[1975]: 2025-07-07 00:00:04.565 [INFO][5345] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" Namespace="calico-apiserver" Pod="calico-apiserver-78dd578d87-r8llj" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--r8llj-eth0"
Jul 7 00:00:07.422336 containerd[1975]: 2025-07-07 00:00:05.632 [INFO][5426] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" HandleID="k8s-pod-network.a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--r8llj-eth0"
Jul 7 00:00:07.422336 containerd[1975]: 2025-07-07 00:00:05.632 [INFO][5426] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" HandleID="k8s-pod-network.a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--r8llj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002318b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-21-95", "pod":"calico-apiserver-78dd578d87-r8llj", "timestamp":"2025-07-07 00:00:05.632668948 +0000 UTC"}, Hostname:"ip-172-31-21-95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 7 00:00:07.422336 containerd[1975]: 2025-07-07 00:00:05.632 [INFO][5426] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 00:00:07.422336 containerd[1975]: 2025-07-07 00:00:06.863 [INFO][5426] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 00:00:07.422336 containerd[1975]: 2025-07-07 00:00:06.864 [INFO][5426] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-95'
Jul 7 00:00:07.422336 containerd[1975]: 2025-07-07 00:00:06.918 [INFO][5426] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" host="ip-172-31-21-95"
Jul 7 00:00:07.422336 containerd[1975]: 2025-07-07 00:00:07.025 [INFO][5426] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-95"
Jul 7 00:00:07.422336 containerd[1975]: 2025-07-07 00:00:07.073 [INFO][5426] ipam/ipam.go 511: Trying affinity for 192.168.15.128/26 host="ip-172-31-21-95"
Jul 7 00:00:07.422336 containerd[1975]: 2025-07-07 00:00:07.101 [INFO][5426] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.128/26 host="ip-172-31-21-95"
Jul 7 00:00:07.422336 containerd[1975]: 2025-07-07 00:00:07.141 [INFO][5426] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.128/26 host="ip-172-31-21-95"
Jul 7 00:00:07.422336 containerd[1975]: 2025-07-07 00:00:07.143 [INFO][5426] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.128/26 handle="k8s-pod-network.a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" host="ip-172-31-21-95"
Jul 7 00:00:07.422336 containerd[1975]: 2025-07-07 00:00:07.189 [INFO][5426] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce
Jul 7 00:00:07.422336 containerd[1975]: 2025-07-07 00:00:07.220 [INFO][5426] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.128/26 handle="k8s-pod-network.a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" host="ip-172-31-21-95"
Jul 7 00:00:07.422336 containerd[1975]: 2025-07-07 00:00:07.245 [INFO][5426] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.134/26] block=192.168.15.128/26 handle="k8s-pod-network.a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" host="ip-172-31-21-95"
Jul 7 00:00:07.422336 containerd[1975]: 2025-07-07 00:00:07.246 [INFO][5426] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.134/26] handle="k8s-pod-network.a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" host="ip-172-31-21-95"
Jul 7 00:00:07.422336 containerd[1975]: 2025-07-07 00:00:07.246 [INFO][5426] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 00:00:07.422336 containerd[1975]: 2025-07-07 00:00:07.246 [INFO][5426] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.134/26] IPv6=[] ContainerID="a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" HandleID="k8s-pod-network.a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--r8llj-eth0"
Jul 7 00:00:07.424177 containerd[1975]: 2025-07-07 00:00:07.287 [INFO][5345] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" Namespace="calico-apiserver" Pod="calico-apiserver-78dd578d87-r8llj" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--r8llj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--r8llj-eth0", GenerateName:"calico-apiserver-78dd578d87-", Namespace:"calico-apiserver", SelfLink:"", UID:"09c6b849-d9f2-457c-9d21-c2403e3bc700", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78dd578d87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-95", ContainerID:"", Pod:"calico-apiserver-78dd578d87-r8llj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali691761c7e36", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 00:00:07.424177 containerd[1975]: 2025-07-07 00:00:07.290 [INFO][5345] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.134/32] ContainerID="a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" Namespace="calico-apiserver" Pod="calico-apiserver-78dd578d87-r8llj" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--r8llj-eth0"
Jul 7 00:00:07.424177 containerd[1975]: 2025-07-07 00:00:07.292 [INFO][5345] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali691761c7e36 ContainerID="a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" Namespace="calico-apiserver" Pod="calico-apiserver-78dd578d87-r8llj" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--r8llj-eth0"
Jul 7 00:00:07.424177 containerd[1975]: 2025-07-07 00:00:07.333 [INFO][5345] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" Namespace="calico-apiserver" Pod="calico-apiserver-78dd578d87-r8llj" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--r8llj-eth0"
Jul 7 00:00:07.424177 containerd[1975]: 2025-07-07 00:00:07.343 [INFO][5345] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" Namespace="calico-apiserver" Pod="calico-apiserver-78dd578d87-r8llj" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--r8llj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--r8llj-eth0", GenerateName:"calico-apiserver-78dd578d87-", Namespace:"calico-apiserver", SelfLink:"", UID:"09c6b849-d9f2-457c-9d21-c2403e3bc700", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78dd578d87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-95", ContainerID:"a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce", Pod:"calico-apiserver-78dd578d87-r8llj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali691761c7e36", MAC:"a2:e5:65:75:3f:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 00:00:07.424177 containerd[1975]: 2025-07-07 00:00:07.407 [INFO][5345] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" Namespace="calico-apiserver" Pod="calico-apiserver-78dd578d87-r8llj" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--r8llj-eth0"
Jul 7 00:00:07.476374 systemd[1]: Started cri-containerd-04f2b9d4616b65f7aaeeae13916fb7235fa4deb93d720aafcbc3311e6fc513f1.scope - libcontainer container 04f2b9d4616b65f7aaeeae13916fb7235fa4deb93d720aafcbc3311e6fc513f1.
Jul 7 00:00:07.479570 containerd[1975]: 2025-07-07 00:00:05.864 [INFO][5397] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154"
Jul 7 00:00:07.479570 containerd[1975]: 2025-07-07 00:00:05.864 [INFO][5397] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154" iface="eth0" netns="/var/run/netns/cni-e4292e7d-cc27-a5dd-4024-5a83188753ab"
Jul 7 00:00:07.479570 containerd[1975]: 2025-07-07 00:00:05.865 [INFO][5397] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154" iface="eth0" netns="/var/run/netns/cni-e4292e7d-cc27-a5dd-4024-5a83188753ab"
Jul 7 00:00:07.479570 containerd[1975]: 2025-07-07 00:00:05.866 [INFO][5397] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154" iface="eth0" netns="/var/run/netns/cni-e4292e7d-cc27-a5dd-4024-5a83188753ab"
Jul 7 00:00:07.479570 containerd[1975]: 2025-07-07 00:00:05.866 [INFO][5397] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154"
Jul 7 00:00:07.479570 containerd[1975]: 2025-07-07 00:00:05.866 [INFO][5397] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154"
Jul 7 00:00:07.479570 containerd[1975]: 2025-07-07 00:00:06.416 [INFO][5498] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154" HandleID="k8s-pod-network.ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--hbf8l-eth0"
Jul 7 00:00:07.479570 containerd[1975]: 2025-07-07 00:00:06.420 [INFO][5498] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 00:00:07.479570 containerd[1975]: 2025-07-07 00:00:07.355 [INFO][5498] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 00:00:07.479570 containerd[1975]: 2025-07-07 00:00:07.422 [WARNING][5498] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154" HandleID="k8s-pod-network.ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--hbf8l-eth0"
Jul 7 00:00:07.479570 containerd[1975]: 2025-07-07 00:00:07.422 [INFO][5498] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154" HandleID="k8s-pod-network.ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--hbf8l-eth0"
Jul 7 00:00:07.479570 containerd[1975]: 2025-07-07 00:00:07.433 [INFO][5498] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 00:00:07.479570 containerd[1975]: 2025-07-07 00:00:07.455 [INFO][5397] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154"
Jul 7 00:00:07.482149 containerd[1975]: time="2025-07-07T00:00:07.481651450Z" level=info msg="TearDown network for sandbox \"ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154\" successfully"
Jul 7 00:00:07.482149 containerd[1975]: time="2025-07-07T00:00:07.481917574Z" level=info msg="StopPodSandbox for \"ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154\" returns successfully"
Jul 7 00:00:07.483777 containerd[1975]: time="2025-07-07T00:00:07.482804886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78dd578d87-hbf8l,Uid:4073ee90-8739-4135-b438-25bdb06e58b4,Namespace:calico-apiserver,Attempt:1,}"
Jul 7 00:00:07.565472 sshd[5445]: pam_unix(sshd:session): session closed for user core
Jul 7 00:00:07.583149 systemd[1]: sshd@10-172.31.21.95:22-147.75.109.163:40888.service: Deactivated successfully.
Jul 7 00:00:07.589763 systemd[1]: session-11.scope: Deactivated successfully.
Jul 7 00:00:07.593355 systemd-logind[1952]: Session 11 logged out. Waiting for processes to exit.
Jul 7 00:00:07.599979 systemd-logind[1952]: Removed session 11.
Jul 7 00:00:07.762529 systemd-networkd[1816]: cali2b18ed9e7b2: Link UP
Jul 7 00:00:07.766024 systemd-networkd[1816]: cali2b18ed9e7b2: Gained carrier
Jul 7 00:00:07.795435 containerd[1975]: time="2025-07-07T00:00:07.787439626Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 00:00:07.795435 containerd[1975]: time="2025-07-07T00:00:07.787505701Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 00:00:07.795435 containerd[1975]: time="2025-07-07T00:00:07.787532193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 00:00:07.795435 containerd[1975]: time="2025-07-07T00:00:07.787664366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 00:00:07.827942 containerd[1975]: 2025-07-07 00:00:06.391 [INFO][5481] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--95-k8s-csi--node--driver--lrnkv-eth0 csi-node-driver- calico-system 6b84e45b-9676-47c1-bdf6-d1f78bd2c24a 1044 0 2025-07-06 23:59:35 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-21-95 csi-node-driver-lrnkv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2b18ed9e7b2 [] [] }} ContainerID="b6d0de9c5c6f8e5c51306884de7cb32099bc31f7beabc9094ce2e6560b21c42b" Namespace="calico-system" Pod="csi-node-driver-lrnkv" WorkloadEndpoint="ip--172--31--21--95-k8s-csi--node--driver--lrnkv-"
Jul 7 00:00:07.827942 containerd[1975]: 2025-07-07 00:00:06.391 [INFO][5481] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b6d0de9c5c6f8e5c51306884de7cb32099bc31f7beabc9094ce2e6560b21c42b" Namespace="calico-system" Pod="csi-node-driver-lrnkv" WorkloadEndpoint="ip--172--31--21--95-k8s-csi--node--driver--lrnkv-eth0"
Jul 7 00:00:07.827942 containerd[1975]: 2025-07-07 00:00:07.033 [INFO][5522] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b6d0de9c5c6f8e5c51306884de7cb32099bc31f7beabc9094ce2e6560b21c42b" HandleID="k8s-pod-network.b6d0de9c5c6f8e5c51306884de7cb32099bc31f7beabc9094ce2e6560b21c42b" Workload="ip--172--31--21--95-k8s-csi--node--driver--lrnkv-eth0"
Jul 7 00:00:07.827942 containerd[1975]: 2025-07-07 00:00:07.034 [INFO][5522] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b6d0de9c5c6f8e5c51306884de7cb32099bc31f7beabc9094ce2e6560b21c42b" HandleID="k8s-pod-network.b6d0de9c5c6f8e5c51306884de7cb32099bc31f7beabc9094ce2e6560b21c42b" Workload="ip--172--31--21--95-k8s-csi--node--driver--lrnkv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003993b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-21-95", "pod":"csi-node-driver-lrnkv", "timestamp":"2025-07-07 00:00:07.033274007 +0000 UTC"}, Hostname:"ip-172-31-21-95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 7 00:00:07.827942 containerd[1975]: 2025-07-07 00:00:07.034 [INFO][5522] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 00:00:07.827942 containerd[1975]: 2025-07-07 00:00:07.434 [INFO][5522] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 00:00:07.827942 containerd[1975]: 2025-07-07 00:00:07.434 [INFO][5522] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-95'
Jul 7 00:00:07.827942 containerd[1975]: 2025-07-07 00:00:07.499 [INFO][5522] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b6d0de9c5c6f8e5c51306884de7cb32099bc31f7beabc9094ce2e6560b21c42b" host="ip-172-31-21-95"
Jul 7 00:00:07.827942 containerd[1975]: 2025-07-07 00:00:07.550 [INFO][5522] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-95"
Jul 7 00:00:07.827942 containerd[1975]: 2025-07-07 00:00:07.582 [INFO][5522] ipam/ipam.go 511: Trying affinity for 192.168.15.128/26 host="ip-172-31-21-95"
Jul 7 00:00:07.827942 containerd[1975]: 2025-07-07 00:00:07.590 [INFO][5522] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.128/26 host="ip-172-31-21-95"
Jul 7 00:00:07.827942 containerd[1975]: 2025-07-07 00:00:07.623 [INFO][5522] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.128/26 host="ip-172-31-21-95"
Jul 7 00:00:07.827942 containerd[1975]: 2025-07-07 00:00:07.623 [INFO][5522] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.128/26 handle="k8s-pod-network.b6d0de9c5c6f8e5c51306884de7cb32099bc31f7beabc9094ce2e6560b21c42b" host="ip-172-31-21-95"
Jul 7 00:00:07.827942 containerd[1975]: 2025-07-07 00:00:07.643 [INFO][5522] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b6d0de9c5c6f8e5c51306884de7cb32099bc31f7beabc9094ce2e6560b21c42b
Jul 7 00:00:07.827942 containerd[1975]: 2025-07-07 00:00:07.670 [INFO][5522] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.128/26 handle="k8s-pod-network.b6d0de9c5c6f8e5c51306884de7cb32099bc31f7beabc9094ce2e6560b21c42b" host="ip-172-31-21-95"
Jul 7 00:00:07.827942 containerd[1975]: 2025-07-07 00:00:07.703 [INFO][5522] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.135/26] block=192.168.15.128/26 handle="k8s-pod-network.b6d0de9c5c6f8e5c51306884de7cb32099bc31f7beabc9094ce2e6560b21c42b" host="ip-172-31-21-95"
Jul 7 00:00:07.827942 containerd[1975]: 2025-07-07 00:00:07.703 [INFO][5522] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.135/26] handle="k8s-pod-network.b6d0de9c5c6f8e5c51306884de7cb32099bc31f7beabc9094ce2e6560b21c42b" host="ip-172-31-21-95"
Jul 7 00:00:07.827942 containerd[1975]: 2025-07-07 00:00:07.703 [INFO][5522] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 00:00:07.827942 containerd[1975]: 2025-07-07 00:00:07.703 [INFO][5522] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.135/26] IPv6=[] ContainerID="b6d0de9c5c6f8e5c51306884de7cb32099bc31f7beabc9094ce2e6560b21c42b" HandleID="k8s-pod-network.b6d0de9c5c6f8e5c51306884de7cb32099bc31f7beabc9094ce2e6560b21c42b" Workload="ip--172--31--21--95-k8s-csi--node--driver--lrnkv-eth0"
Jul 7 00:00:07.829078 containerd[1975]: 2025-07-07 00:00:07.744 [INFO][5481] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b6d0de9c5c6f8e5c51306884de7cb32099bc31f7beabc9094ce2e6560b21c42b" Namespace="calico-system" Pod="csi-node-driver-lrnkv" WorkloadEndpoint="ip--172--31--21--95-k8s-csi--node--driver--lrnkv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--95-k8s-csi--node--driver--lrnkv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6b84e45b-9676-47c1-bdf6-d1f78bd2c24a", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-95", ContainerID:"", Pod:"csi-node-driver-lrnkv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.15.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2b18ed9e7b2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 00:00:07.829078 containerd[1975]: 2025-07-07 00:00:07.744 [INFO][5481] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.135/32] ContainerID="b6d0de9c5c6f8e5c51306884de7cb32099bc31f7beabc9094ce2e6560b21c42b" Namespace="calico-system" Pod="csi-node-driver-lrnkv" WorkloadEndpoint="ip--172--31--21--95-k8s-csi--node--driver--lrnkv-eth0"
Jul 7 00:00:07.829078 containerd[1975]: 2025-07-07 00:00:07.744 [INFO][5481] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2b18ed9e7b2 ContainerID="b6d0de9c5c6f8e5c51306884de7cb32099bc31f7beabc9094ce2e6560b21c42b" Namespace="calico-system" Pod="csi-node-driver-lrnkv" WorkloadEndpoint="ip--172--31--21--95-k8s-csi--node--driver--lrnkv-eth0"
Jul 7 00:00:07.829078 containerd[1975]: 2025-07-07 00:00:07.767 [INFO][5481] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b6d0de9c5c6f8e5c51306884de7cb32099bc31f7beabc9094ce2e6560b21c42b" Namespace="calico-system" Pod="csi-node-driver-lrnkv" WorkloadEndpoint="ip--172--31--21--95-k8s-csi--node--driver--lrnkv-eth0"
Jul 7 00:00:07.829078 containerd[1975]: 2025-07-07 00:00:07.768 [INFO][5481] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b6d0de9c5c6f8e5c51306884de7cb32099bc31f7beabc9094ce2e6560b21c42b" Namespace="calico-system" Pod="csi-node-driver-lrnkv" WorkloadEndpoint="ip--172--31--21--95-k8s-csi--node--driver--lrnkv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--95-k8s-csi--node--driver--lrnkv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6b84e45b-9676-47c1-bdf6-d1f78bd2c24a", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-95", ContainerID:"b6d0de9c5c6f8e5c51306884de7cb32099bc31f7beabc9094ce2e6560b21c42b", Pod:"csi-node-driver-lrnkv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.15.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2b18ed9e7b2", MAC:"9a:b1:d1:3e:2f:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 00:00:07.829078 containerd[1975]: 2025-07-07 00:00:07.803 [INFO][5481] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b6d0de9c5c6f8e5c51306884de7cb32099bc31f7beabc9094ce2e6560b21c42b" Namespace="calico-system" Pod="csi-node-driver-lrnkv" WorkloadEndpoint="ip--172--31--21--95-k8s-csi--node--driver--lrnkv-eth0"
Jul 7 00:00:07.839855 containerd[1975]: time="2025-07-07T00:00:07.839511028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2s8cg,Uid:93b5c195-f2cb-4978-9046-bbb50dfd5a25,Namespace:kube-system,Attempt:1,} returns sandbox id \"04f2b9d4616b65f7aaeeae13916fb7235fa4deb93d720aafcbc3311e6fc513f1\""
Jul 7 00:00:07.893156 systemd[1]: Started cri-containerd-a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce.scope - libcontainer container a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce.
Jul 7 00:00:07.908260 containerd[1975]: time="2025-07-07T00:00:07.908188663Z" level=info msg="CreateContainer within sandbox \"04f2b9d4616b65f7aaeeae13916fb7235fa4deb93d720aafcbc3311e6fc513f1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 7 00:00:08.024409 containerd[1975]: time="2025-07-07T00:00:08.024368370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8484c8784c-78zl4,Uid:e6136ec6-ffc6-441a-9474-e2f8829c266e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3c4763e818561c7ea4bb7c80ebab540caeffdc822297a897a8773359cdb2947a\""
Jul 7 00:00:08.045180 containerd[1975]: time="2025-07-07T00:00:08.045137787Z" level=info msg="CreateContainer within sandbox \"04f2b9d4616b65f7aaeeae13916fb7235fa4deb93d720aafcbc3311e6fc513f1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"457d89da0f9b9a74145e97fbfc987fab4c29e2f4675e0abd7df959db5d54ca61\""
Jul 7 00:00:08.055989 containerd[1975]: time="2025-07-07T00:00:08.052947504Z" level=info msg="StartContainer for \"457d89da0f9b9a74145e97fbfc987fab4c29e2f4675e0abd7df959db5d54ca61\""
Jul 7 00:00:08.063580 systemd[1]: run-netns-cni\x2de4292e7d\x2dcc27\x2da5dd\x2d4024\x2d5a83188753ab.mount: Deactivated successfully.
Jul 7 00:00:08.115405 containerd[1975]: time="2025-07-07T00:00:08.113839754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 00:00:08.115405 containerd[1975]: time="2025-07-07T00:00:08.113928194Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 00:00:08.115405 containerd[1975]: time="2025-07-07T00:00:08.113998947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 00:00:08.115405 containerd[1975]: time="2025-07-07T00:00:08.114142507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 00:00:08.256592 systemd[1]: Started cri-containerd-b6d0de9c5c6f8e5c51306884de7cb32099bc31f7beabc9094ce2e6560b21c42b.scope - libcontainer container b6d0de9c5c6f8e5c51306884de7cb32099bc31f7beabc9094ce2e6560b21c42b.
Jul 7 00:00:08.303197 containerd[1975]: time="2025-07-07T00:00:08.302524211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78dd578d87-r8llj,Uid:09c6b849-d9f2-457c-9d21-c2403e3bc700,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce\""
Jul 7 00:00:08.302747 systemd[1]: Started cri-containerd-457d89da0f9b9a74145e97fbfc987fab4c29e2f4675e0abd7df959db5d54ca61.scope - libcontainer container 457d89da0f9b9a74145e97fbfc987fab4c29e2f4675e0abd7df959db5d54ca61.
Jul 7 00:00:08.410998 systemd-networkd[1816]: cali478539d4343: Gained IPv6LL Jul 7 00:00:08.411349 systemd-networkd[1816]: cali0dfade95f97: Gained IPv6LL Jul 7 00:00:08.446329 systemd-networkd[1816]: calid7b233dadea: Link UP Jul 7 00:00:08.451514 systemd-networkd[1816]: calid7b233dadea: Gained carrier Jul 7 00:00:08.513639 containerd[1975]: time="2025-07-07T00:00:08.511384480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lrnkv,Uid:6b84e45b-9676-47c1-bdf6-d1f78bd2c24a,Namespace:calico-system,Attempt:1,} returns sandbox id \"b6d0de9c5c6f8e5c51306884de7cb32099bc31f7beabc9094ce2e6560b21c42b\"" Jul 7 00:00:08.517805 containerd[1975]: time="2025-07-07T00:00:08.517735169Z" level=info msg="StartContainer for \"457d89da0f9b9a74145e97fbfc987fab4c29e2f4675e0abd7df959db5d54ca61\" returns successfully" Jul 7 00:00:08.528592 containerd[1975]: 2025-07-07 00:00:07.862 [INFO][5663] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--hbf8l-eth0 calico-apiserver-78dd578d87- calico-apiserver 4073ee90-8739-4135-b438-25bdb06e58b4 1050 0 2025-07-06 23:59:30 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:78dd578d87 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-21-95 calico-apiserver-78dd578d87-hbf8l eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid7b233dadea [] [] }} ContainerID="4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" Namespace="calico-apiserver" Pod="calico-apiserver-78dd578d87-hbf8l" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--hbf8l-" Jul 7 00:00:08.528592 containerd[1975]: 2025-07-07 00:00:07.867 [INFO][5663] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" Namespace="calico-apiserver" Pod="calico-apiserver-78dd578d87-hbf8l" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--hbf8l-eth0" Jul 7 00:00:08.528592 containerd[1975]: 2025-07-07 00:00:08.145 [INFO][5721] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" HandleID="k8s-pod-network.4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--hbf8l-eth0" Jul 7 00:00:08.528592 containerd[1975]: 2025-07-07 00:00:08.147 [INFO][5721] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" HandleID="k8s-pod-network.4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--hbf8l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000397400), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-21-95", "pod":"calico-apiserver-78dd578d87-hbf8l", "timestamp":"2025-07-07 00:00:08.143958172 +0000 UTC"}, Hostname:"ip-172-31-21-95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:00:08.528592 containerd[1975]: 2025-07-07 00:00:08.147 [INFO][5721]
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:08.528592 containerd[1975]: 2025-07-07 00:00:08.147 [INFO][5721] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:00:08.528592 containerd[1975]: 2025-07-07 00:00:08.147 [INFO][5721] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-95' Jul 7 00:00:08.528592 containerd[1975]: 2025-07-07 00:00:08.200 [INFO][5721] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" host="ip-172-31-21-95" Jul 7 00:00:08.528592 containerd[1975]: 2025-07-07 00:00:08.217 [INFO][5721] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-95" Jul 7 00:00:08.528592 containerd[1975]: 2025-07-07 00:00:08.262 [INFO][5721] ipam/ipam.go 511: Trying affinity for 192.168.15.128/26 host="ip-172-31-21-95" Jul 7 00:00:08.528592 containerd[1975]: 2025-07-07 00:00:08.281 [INFO][5721] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.128/26 host="ip-172-31-21-95" Jul 7 00:00:08.528592 containerd[1975]: 2025-07-07 00:00:08.295 [INFO][5721] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.128/26 host="ip-172-31-21-95" Jul 7 00:00:08.528592 containerd[1975]: 2025-07-07 00:00:08.300 [INFO][5721] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.128/26 handle="k8s-pod-network.4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" host="ip-172-31-21-95" Jul 7 00:00:08.528592 containerd[1975]: 2025-07-07 00:00:08.308 [INFO][5721] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643 Jul 7 00:00:08.528592 containerd[1975]: 2025-07-07 00:00:08.343 [INFO][5721] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.128/26 handle="k8s-pod-network.4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" host="ip-172-31-21-95" Jul 7 00:00:08.528592 containerd[1975]: 2025-07-07 00:00:08.374 [INFO][5721] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.136/26] block=192.168.15.128/26 handle="k8s-pod-network.4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" host="ip-172-31-21-95" Jul 7 00:00:08.528592 containerd[1975]: 2025-07-07 00:00:08.374 [INFO][5721] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.136/26] handle="k8s-pod-network.4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" host="ip-172-31-21-95" Jul 7 00:00:08.528592 containerd[1975]: 2025-07-07 00:00:08.375 [INFO][5721] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 00:00:08.528592 containerd[1975]: 2025-07-07 00:00:08.375 [INFO][5721] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.136/26] IPv6=[] ContainerID="4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" HandleID="k8s-pod-network.4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--hbf8l-eth0" Jul 7 00:00:08.529613 containerd[1975]: 2025-07-07 00:00:08.391 [INFO][5663] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" Namespace="calico-apiserver" Pod="calico-apiserver-78dd578d87-hbf8l" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--hbf8l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--hbf8l-eth0", GenerateName:"calico-apiserver-78dd578d87-", Namespace:"calico-apiserver", SelfLink:"", UID:"4073ee90-8739-4135-b438-25bdb06e58b4", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 30, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78dd578d87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-95", ContainerID:"", Pod:"calico-apiserver-78dd578d87-hbf8l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid7b233dadea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:08.529613 containerd[1975]: 2025-07-07 00:00:08.396 [INFO][5663] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.136/32] ContainerID="4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" Namespace="calico-apiserver" Pod="calico-apiserver-78dd578d87-hbf8l" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--hbf8l-eth0" Jul 7 00:00:08.529613 containerd[1975]: 2025-07-07 00:00:08.398 [INFO][5663] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid7b233dadea ContainerID="4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" Namespace="calico-apiserver" Pod="calico-apiserver-78dd578d87-hbf8l" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--hbf8l-eth0" Jul 7 00:00:08.529613 containerd[1975]: 2025-07-07 00:00:08.468 [INFO][5663] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" Namespace="calico-apiserver" Pod="calico-apiserver-78dd578d87-hbf8l" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--hbf8l-eth0" Jul 7 00:00:08.529613 containerd[1975]: 2025-07-07 00:00:08.472 [INFO][5663] cni-plugin/k8s.go 446: Added Mac, interface name, and
active container ID to endpoint ContainerID="4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" Namespace="calico-apiserver" Pod="calico-apiserver-78dd578d87-hbf8l" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--hbf8l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--hbf8l-eth0", GenerateName:"calico-apiserver-78dd578d87-", Namespace:"calico-apiserver", SelfLink:"", UID:"4073ee90-8739-4135-b438-25bdb06e58b4", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 30, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78dd578d87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-95", ContainerID:"4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643", Pod:"calico-apiserver-78dd578d87-hbf8l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid7b233dadea", MAC:"d2:16:e4:94:41:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:08.529613 containerd[1975]: 2025-07-07 00:00:08.508 [INFO][5663] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" Namespace="calico-apiserver" Pod="calico-apiserver-78dd578d87-hbf8l" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--hbf8l-eth0" Jul 7 00:00:08.618329 systemd-networkd[1816]: cali18074825c5d: Link UP Jul 7 00:00:08.627441 systemd-networkd[1816]: cali18074825c5d: Gained carrier Jul 7 00:00:08.670261 containerd[1975]: time="2025-07-07T00:00:08.661268648Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:00:08.670261 containerd[1975]: time="2025-07-07T00:00:08.661388072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:00:08.670261 containerd[1975]: time="2025-07-07T00:00:08.661412984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:08.670261 containerd[1975]: time="2025-07-07T00:00:08.664579149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:08.680398 containerd[1975]: 2025-07-07 00:00:07.811 [INFO][5639] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--95-k8s-coredns--674b8bbfcf--7m765-eth0 coredns-674b8bbfcf- kube-system a11ff9fd-e988-4620-8c05-f0bff4ac262f 1046 0 2025-07-06 23:59:19 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-21-95 coredns-674b8bbfcf-7m765 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali18074825c5d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="762902ec33b31331472eba8c217fa365b84372ea63bed5c039d7fb19a868d592" Namespace="kube-system" Pod="coredns-674b8bbfcf-7m765" WorkloadEndpoint="ip--172--31--21--95-k8s-coredns--674b8bbfcf--7m765-" Jul 7 00:00:08.680398 containerd[1975]: 2025-07-07 00:00:07.812 [INFO][5639] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="762902ec33b31331472eba8c217fa365b84372ea63bed5c039d7fb19a868d592" Namespace="kube-system" Pod="coredns-674b8bbfcf-7m765" WorkloadEndpoint="ip--172--31--21--95-k8s-coredns--674b8bbfcf--7m765-eth0" Jul 7 00:00:08.680398 containerd[1975]: 2025-07-07 00:00:08.210 [INFO][5710] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="762902ec33b31331472eba8c217fa365b84372ea63bed5c039d7fb19a868d592" HandleID="k8s-pod-network.762902ec33b31331472eba8c217fa365b84372ea63bed5c039d7fb19a868d592" Workload="ip--172--31--21--95-k8s-coredns--674b8bbfcf--7m765-eth0" Jul 7 00:00:08.680398 containerd[1975]: 2025-07-07 00:00:08.210 [INFO][5710] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="762902ec33b31331472eba8c217fa365b84372ea63bed5c039d7fb19a868d592" HandleID="k8s-pod-network.762902ec33b31331472eba8c217fa365b84372ea63bed5c039d7fb19a868d592" Workload="ip--172--31--21--95-k8s-coredns--674b8bbfcf--7m765-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000348fc0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-21-95", "pod":"coredns-674b8bbfcf-7m765", "timestamp":"2025-07-07 00:00:08.210540521 +0000 UTC"}, Hostname:"ip-172-31-21-95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:00:08.680398 containerd[1975]: 2025-07-07 00:00:08.210 [INFO][5710] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:08.680398 containerd[1975]: 2025-07-07 00:00:08.376 [INFO][5710] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 00:00:08.680398 containerd[1975]: 2025-07-07 00:00:08.379 [INFO][5710] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-95' Jul 7 00:00:08.680398 containerd[1975]: 2025-07-07 00:00:08.441 [INFO][5710] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.762902ec33b31331472eba8c217fa365b84372ea63bed5c039d7fb19a868d592" host="ip-172-31-21-95" Jul 7 00:00:08.680398 containerd[1975]: 2025-07-07 00:00:08.484 [INFO][5710] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-95" Jul 7 00:00:08.680398 containerd[1975]: 2025-07-07 00:00:08.523 [INFO][5710] ipam/ipam.go 511: Trying affinity for 192.168.15.128/26 host="ip-172-31-21-95" Jul 7 00:00:08.680398 containerd[1975]: 2025-07-07 00:00:08.536 [INFO][5710] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.128/26 host="ip-172-31-21-95" Jul 7 00:00:08.680398 containerd[1975]: 2025-07-07 00:00:08.547 [INFO][5710] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.128/26 host="ip-172-31-21-95" Jul 7 00:00:08.680398 containerd[1975]: 2025-07-07 00:00:08.547 [INFO][5710] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.128/26 handle="k8s-pod-network.762902ec33b31331472eba8c217fa365b84372ea63bed5c039d7fb19a868d592" host="ip-172-31-21-95" Jul 7 00:00:08.680398 containerd[1975]: 2025-07-07 00:00:08.554 [INFO][5710] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.762902ec33b31331472eba8c217fa365b84372ea63bed5c039d7fb19a868d592 Jul 7 00:00:08.680398 containerd[1975]: 2025-07-07 00:00:08.570 [INFO][5710] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.128/26 handle="k8s-pod-network.762902ec33b31331472eba8c217fa365b84372ea63bed5c039d7fb19a868d592" host="ip-172-31-21-95" Jul 7 00:00:08.680398 containerd[1975]: 2025-07-07 00:00:08.593 [INFO][5710] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.137/26] block=192.168.15.128/26 handle="k8s-pod-network.762902ec33b31331472eba8c217fa365b84372ea63bed5c039d7fb19a868d592" host="ip-172-31-21-95" Jul 7 00:00:08.680398 containerd[1975]: 2025-07-07 00:00:08.593 [INFO][5710] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.137/26] handle="k8s-pod-network.762902ec33b31331472eba8c217fa365b84372ea63bed5c039d7fb19a868d592" host="ip-172-31-21-95" Jul 7 00:00:08.680398 containerd[1975]: 2025-07-07 00:00:08.593 [INFO][5710] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 00:00:08.680398 containerd[1975]: 2025-07-07 00:00:08.593 [INFO][5710] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.137/26] IPv6=[] ContainerID="762902ec33b31331472eba8c217fa365b84372ea63bed5c039d7fb19a868d592" HandleID="k8s-pod-network.762902ec33b31331472eba8c217fa365b84372ea63bed5c039d7fb19a868d592" Workload="ip--172--31--21--95-k8s-coredns--674b8bbfcf--7m765-eth0" Jul 7 00:00:08.681935 containerd[1975]: 2025-07-07 00:00:08.607 [INFO][5639] cni-plugin/k8s.go 418: Populated endpoint ContainerID="762902ec33b31331472eba8c217fa365b84372ea63bed5c039d7fb19a868d592" Namespace="kube-system" Pod="coredns-674b8bbfcf-7m765" WorkloadEndpoint="ip--172--31--21--95-k8s-coredns--674b8bbfcf--7m765-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--95-k8s-coredns--674b8bbfcf--7m765-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a11ff9fd-e988-4620-8c05-f0bff4ac262f", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-95", ContainerID:"", Pod:"coredns-674b8bbfcf-7m765", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali18074825c5d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:08.681935 containerd[1975]: 2025-07-07 00:00:08.607 [INFO][5639] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.137/32] ContainerID="762902ec33b31331472eba8c217fa365b84372ea63bed5c039d7fb19a868d592" Namespace="kube-system" Pod="coredns-674b8bbfcf-7m765" WorkloadEndpoint="ip--172--31--21--95-k8s-coredns--674b8bbfcf--7m765-eth0" Jul 7 00:00:08.681935 containerd[1975]: 2025-07-07 00:00:08.607 [INFO][5639] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali18074825c5d ContainerID="762902ec33b31331472eba8c217fa365b84372ea63bed5c039d7fb19a868d592" Namespace="kube-system" Pod="coredns-674b8bbfcf-7m765" WorkloadEndpoint="ip--172--31--21--95-k8s-coredns--674b8bbfcf--7m765-eth0" Jul 7 00:00:08.681935 containerd[1975]: 2025-07-07 00:00:08.634 [INFO][5639] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="762902ec33b31331472eba8c217fa365b84372ea63bed5c039d7fb19a868d592" Namespace="kube-system" Pod="coredns-674b8bbfcf-7m765"
WorkloadEndpoint="ip--172--31--21--95-k8s-coredns--674b8bbfcf--7m765-eth0" Jul 7 00:00:08.681935 containerd[1975]: 2025-07-07 00:00:08.635 [INFO][5639] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="762902ec33b31331472eba8c217fa365b84372ea63bed5c039d7fb19a868d592" Namespace="kube-system" Pod="coredns-674b8bbfcf-7m765" WorkloadEndpoint="ip--172--31--21--95-k8s-coredns--674b8bbfcf--7m765-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--95-k8s-coredns--674b8bbfcf--7m765-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a11ff9fd-e988-4620-8c05-f0bff4ac262f", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-95", ContainerID:"762902ec33b31331472eba8c217fa365b84372ea63bed5c039d7fb19a868d592", Pod:"coredns-674b8bbfcf-7m765", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali18074825c5d", MAC:"a6:60:a8:b3:b0:4f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:08.681935 containerd[1975]: 2025-07-07 00:00:08.662 [INFO][5639] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="762902ec33b31331472eba8c217fa365b84372ea63bed5c039d7fb19a868d592" Namespace="kube-system" Pod="coredns-674b8bbfcf-7m765" WorkloadEndpoint="ip--172--31--21--95-k8s-coredns--674b8bbfcf--7m765-eth0" Jul 7 00:00:08.712315 systemd[1]: Started cri-containerd-4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643.scope - libcontainer container 4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643. Jul 7 00:00:08.725975 systemd-networkd[1816]: cali691761c7e36: Gained IPv6LL Jul 7 00:00:08.782106 containerd[1975]: time="2025-07-07T00:00:08.781660373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:00:08.782106 containerd[1975]: time="2025-07-07T00:00:08.781733102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:00:08.782106 containerd[1975]: time="2025-07-07T00:00:08.781749492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:08.782106 containerd[1975]: time="2025-07-07T00:00:08.781861712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:08.789165 systemd-networkd[1816]: cali2b18ed9e7b2: Gained IPv6LL Jul 7 00:00:08.826553 systemd[1]: Started cri-containerd-762902ec33b31331472eba8c217fa365b84372ea63bed5c039d7fb19a868d592.scope - libcontainer container 762902ec33b31331472eba8c217fa365b84372ea63bed5c039d7fb19a868d592. Jul 7 00:00:09.019283 kubelet[3195]: I0707 00:00:08.995103 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-2s8cg" podStartSLOduration=49.99507218 podStartE2EDuration="49.99507218s" podCreationTimestamp="2025-07-06 23:59:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:00:08.994558392 +0000 UTC m=+56.266486740" watchObservedRunningTime="2025-07-07 00:00:08.99507218 +0000 UTC m=+56.267000528" Jul 7 00:00:09.034330 containerd[1975]: time="2025-07-07T00:00:09.033017705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7m765,Uid:a11ff9fd-e988-4620-8c05-f0bff4ac262f,Namespace:kube-system,Attempt:1,} returns sandbox id \"762902ec33b31331472eba8c217fa365b84372ea63bed5c039d7fb19a868d592\"" Jul 7 00:00:09.065072 containerd[1975]: time="2025-07-07T00:00:09.064709058Z" level=info msg="CreateContainer within sandbox \"762902ec33b31331472eba8c217fa365b84372ea63bed5c039d7fb19a868d592\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:00:09.110548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3287277399.mount: Deactivated successfully. Jul 7 00:00:09.112884 containerd[1975]: time="2025-07-07T00:00:09.112509845Z" level=info msg="CreateContainer within sandbox \"762902ec33b31331472eba8c217fa365b84372ea63bed5c039d7fb19a868d592\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1d5ad8468edf0ed14947fbd1e6043108564f83f88c08843db743bea2a0059b06\"" Jul 7 00:00:09.114648 containerd[1975]: time="2025-07-07T00:00:09.114537459Z" level=info msg="StartContainer for \"1d5ad8468edf0ed14947fbd1e6043108564f83f88c08843db743bea2a0059b06\"" Jul 7 00:00:09.142092 containerd[1975]: time="2025-07-07T00:00:09.142035257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78dd578d87-hbf8l,Uid:4073ee90-8739-4135-b438-25bdb06e58b4,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643\"" Jul 7 00:00:09.246175 systemd[1]: Started cri-containerd-1d5ad8468edf0ed14947fbd1e6043108564f83f88c08843db743bea2a0059b06.scope - libcontainer container 1d5ad8468edf0ed14947fbd1e6043108564f83f88c08843db743bea2a0059b06. 
Jul 7 00:00:09.350001 containerd[1975]: time="2025-07-07T00:00:09.349186941Z" level=info msg="StartContainer for \"1d5ad8468edf0ed14947fbd1e6043108564f83f88c08843db743bea2a0059b06\" returns successfully" Jul 7 00:00:09.749743 systemd-networkd[1816]: cali18074825c5d: Gained IPv6LL Jul 7 00:00:09.877917 containerd[1975]: time="2025-07-07T00:00:09.830911498Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 7 00:00:09.877917 containerd[1975]: time="2025-07-07T00:00:09.876953980Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 10.803242309s" Jul 7 00:00:09.877917 containerd[1975]: time="2025-07-07T00:00:09.877023404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 7 00:00:09.884851 containerd[1975]: time="2025-07-07T00:00:09.881211821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 7 00:00:09.895676 containerd[1975]: time="2025-07-07T00:00:09.895623283Z" level=info msg="CreateContainer within sandbox \"d3718d69671b75d757dbe8795b638d7344c9a616b741ae8df0f05936d7de97c8\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 7 00:00:09.913494 containerd[1975]: time="2025-07-07T00:00:09.913442879Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:09.923423 containerd[1975]: time="2025-07-07T00:00:09.923377191Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:09.925323 containerd[1975]: time="2025-07-07T00:00:09.924535916Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:09.932723 containerd[1975]: time="2025-07-07T00:00:09.932683044Z" level=info msg="CreateContainer within sandbox \"d3718d69671b75d757dbe8795b638d7344c9a616b741ae8df0f05936d7de97c8\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"6b9c0e0fb82a6d7b08f5ce9479fb7b13633b995978a34f16b159c37090b64cb0\"" Jul 7 00:00:09.934891 containerd[1975]: time="2025-07-07T00:00:09.933556640Z" level=info msg="StartContainer for \"6b9c0e0fb82a6d7b08f5ce9479fb7b13633b995978a34f16b159c37090b64cb0\"" Jul 7 00:00:10.000899 systemd[1]: Started cri-containerd-6b9c0e0fb82a6d7b08f5ce9479fb7b13633b995978a34f16b159c37090b64cb0.scope - libcontainer container 6b9c0e0fb82a6d7b08f5ce9479fb7b13633b995978a34f16b159c37090b64cb0. Jul 7 00:00:10.059410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1969637465.mount: Deactivated successfully. 
Jul 7 00:00:10.068698 kubelet[3195]: I0707 00:00:10.066619 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-7m765" podStartSLOduration=51.066591213 podStartE2EDuration="51.066591213s" podCreationTimestamp="2025-07-06 23:59:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:00:10.006772084 +0000 UTC m=+57.278700431" watchObservedRunningTime="2025-07-07 00:00:10.066591213 +0000 UTC m=+57.338519561" Jul 7 00:00:10.133117 systemd-networkd[1816]: calid7b233dadea: Gained IPv6LL Jul 7 00:00:10.172472 containerd[1975]: time="2025-07-07T00:00:10.172426036Z" level=info msg="StartContainer for \"6b9c0e0fb82a6d7b08f5ce9479fb7b13633b995978a34f16b159c37090b64cb0\" returns successfully" Jul 7 00:00:11.008384 kubelet[3195]: I0707 00:00:11.007771 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-69d69f759-8mmlz" podStartSLOduration=2.303530309 podStartE2EDuration="15.007754413s" podCreationTimestamp="2025-07-06 23:59:56 +0000 UTC" firstStartedPulling="2025-07-06 23:59:57.17500514 +0000 UTC m=+44.446933470" lastFinishedPulling="2025-07-07 00:00:09.879229236 +0000 UTC m=+57.151157574" observedRunningTime="2025-07-07 00:00:11.007528972 +0000 UTC m=+58.279457323" watchObservedRunningTime="2025-07-07 00:00:11.007754413 +0000 UTC m=+58.279682751" Jul 7 00:00:12.602201 systemd[1]: Started sshd@11-172.31.21.95:22-147.75.109.163:59344.service - OpenSSH per-connection server daemon (147.75.109.163:59344). Jul 7 00:00:12.824378 sshd[6034]: Accepted publickey for core from 147.75.109.163 port 59344 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:00:12.828190 sshd[6034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:00:12.836116 systemd-logind[1952]: New session 12 of user core. Jul 7 00:00:12.844116 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jul 7 00:00:13.008241 ntpd[1944]: Listen normally on 7 vxlan.calico 192.168.15.128:123 Jul 7 00:00:13.008332 ntpd[1944]: Listen normally on 8 cali684d280c3e1 [fe80::ecee:eeff:feee:eeee%4]:123 Jul 7 00:00:13.008388 ntpd[1944]: Listen normally on 9 cali5aed4505c37 [fe80::ecee:eeff:feee:eeee%5]:123 Jul 7 00:00:13.008431 ntpd[1944]: Listen normally on 10 vxlan.calico [fe80::647c:a1ff:fe52:cb2f%6]:123 Jul 7 00:00:13.008469 ntpd[1944]: Listen normally on 11 calibf215de4fd8 [fe80::ecee:eeff:feee:eeee%7]:123 Jul 7 00:00:13.008507 ntpd[1944]: Listen normally on 12 cali0dfade95f97 [fe80::ecee:eeff:feee:eeee%10]:123 Jul 7 00:00:13.008545 ntpd[1944]: Listen normally on 13 cali478539d4343 [fe80::ecee:eeff:feee:eeee%11]:123 Jul 7 00:00:13.008583 ntpd[1944]: Listen normally on 14 cali691761c7e36 [fe80::ecee:eeff:feee:eeee%12]:123 Jul 7 00:00:13.008633 ntpd[1944]: Listen normally on 15 cali2b18ed9e7b2 [fe80::ecee:eeff:feee:eeee%13]:123 Jul 7 00:00:13.008669 ntpd[1944]: Listen normally on 16 calid7b233dadea [fe80::ecee:eeff:feee:eeee%14]:123 Jul 7 00:00:13.008706 ntpd[1944]: Listen normally on 17 cali18074825c5d [fe80::ecee:eeff:feee:eeee%15]:123 Jul 7 00:00:13.126539 containerd[1975]: time="2025-07-07T00:00:13.126159006Z" level=info msg="StopPodSandbox for \"1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3\"" Jul 7 00:00:13.406361 containerd[1975]: 2025-07-07 00:00:13.320 [WARNING][6051] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.
ContainerID="1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--95-k8s-csi--node--driver--lrnkv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6b84e45b-9676-47c1-bdf6-d1f78bd2c24a", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-95", ContainerID:"b6d0de9c5c6f8e5c51306884de7cb32099bc31f7beabc9094ce2e6560b21c42b", Pod:"csi-node-driver-lrnkv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.15.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2b18ed9e7b2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:13.406361 containerd[1975]: 2025-07-07 00:00:13.320 [INFO][6051] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3" Jul 7 00:00:13.406361 containerd[1975]: 2025-07-07 00:00:13.320 [INFO][6051] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3" iface="eth0" netns="" Jul 7 00:00:13.406361 containerd[1975]: 2025-07-07 00:00:13.320 [INFO][6051] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3" Jul 7 00:00:13.406361 containerd[1975]: 2025-07-07 00:00:13.320 [INFO][6051] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3" Jul 7 00:00:13.406361 containerd[1975]: 2025-07-07 00:00:13.382 [INFO][6063] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3" HandleID="k8s-pod-network.1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3" Workload="ip--172--31--21--95-k8s-csi--node--driver--lrnkv-eth0" Jul 7 00:00:13.406361 containerd[1975]: 2025-07-07 00:00:13.383 [INFO][6063] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:13.406361 containerd[1975]: 2025-07-07 00:00:13.383 [INFO][6063] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:00:13.406361 containerd[1975]: 2025-07-07 00:00:13.393 [WARNING][6063] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3" HandleID="k8s-pod-network.1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3" Workload="ip--172--31--21--95-k8s-csi--node--driver--lrnkv-eth0" Jul 7 00:00:13.406361 containerd[1975]: 2025-07-07 00:00:13.393 [INFO][6063] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3" HandleID="k8s-pod-network.1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3" Workload="ip--172--31--21--95-k8s-csi--node--driver--lrnkv-eth0" Jul 7 00:00:13.406361 containerd[1975]: 2025-07-07 00:00:13.398 [INFO][6063] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:13.406361 containerd[1975]: 2025-07-07 00:00:13.401 [INFO][6051] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3" Jul 7 00:00:13.409096 containerd[1975]: time="2025-07-07T00:00:13.409049052Z" level=info msg="TearDown network for sandbox \"1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3\" successfully" Jul 7 00:00:13.409096 containerd[1975]: time="2025-07-07T00:00:13.409091853Z" level=info msg="StopPodSandbox for \"1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3\" returns successfully" Jul 7 00:00:13.481397 containerd[1975]: time="2025-07-07T00:00:13.481320294Z" level=info msg="RemovePodSandbox for \"1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3\"" Jul 7 00:00:13.489069 containerd[1975]: time="2025-07-07T00:00:13.488300875Z" level=info msg="Forcibly stopping sandbox \"1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3\"" Jul 7 00:00:13.551562 sshd[6034]: pam_unix(sshd:session): session closed for user core Jul 7 00:00:13.558364 systemd[1]: sshd@11-172.31.21.95:22-147.75.109.163:59344.service: Deactivated successfully. Jul 7 00:00:13.562050 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 00:00:13.564533 systemd-logind[1952]: Session 12 logged out. Waiting for processes to exit. Jul 7 00:00:13.566678 systemd-logind[1952]: Removed session 12. Jul 7 00:00:13.606202 containerd[1975]: 2025-07-07 00:00:13.555 [WARNING][6081] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--95-k8s-csi--node--driver--lrnkv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6b84e45b-9676-47c1-bdf6-d1f78bd2c24a", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-95", ContainerID:"b6d0de9c5c6f8e5c51306884de7cb32099bc31f7beabc9094ce2e6560b21c42b", Pod:"csi-node-driver-lrnkv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.15.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2b18ed9e7b2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:13.606202 containerd[1975]: 2025-07-07 00:00:13.555 [INFO][6081] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3" Jul 7 00:00:13.606202 containerd[1975]: 2025-07-07 00:00:13.555 [INFO][6081] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3" iface="eth0" netns="" Jul 7 00:00:13.606202 containerd[1975]: 2025-07-07 00:00:13.555 [INFO][6081] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3" Jul 7 00:00:13.606202 containerd[1975]: 2025-07-07 00:00:13.555 [INFO][6081] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3" Jul 7 00:00:13.606202 containerd[1975]: 2025-07-07 00:00:13.588 [INFO][6088] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3" HandleID="k8s-pod-network.1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3" Workload="ip--172--31--21--95-k8s-csi--node--driver--lrnkv-eth0" Jul 7 00:00:13.606202 containerd[1975]: 2025-07-07 00:00:13.588 [INFO][6088] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:13.606202 containerd[1975]: 2025-07-07 00:00:13.588 [INFO][6088] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:00:13.606202 containerd[1975]: 2025-07-07 00:00:13.598 [WARNING][6088] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3" HandleID="k8s-pod-network.1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3" Workload="ip--172--31--21--95-k8s-csi--node--driver--lrnkv-eth0" Jul 7 00:00:13.606202 containerd[1975]: 2025-07-07 00:00:13.598 [INFO][6088] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3" HandleID="k8s-pod-network.1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3" Workload="ip--172--31--21--95-k8s-csi--node--driver--lrnkv-eth0" Jul 7 00:00:13.606202 containerd[1975]: 2025-07-07 00:00:13.599 [INFO][6088] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:13.606202 containerd[1975]: 2025-07-07 00:00:13.601 [INFO][6081] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3" Jul 7 00:00:13.606202 containerd[1975]: time="2025-07-07T00:00:13.606203607Z" level=info msg="TearDown network for sandbox \"1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3\" successfully" Jul 7 00:00:13.646413 containerd[1975]: time="2025-07-07T00:00:13.646321065Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:00:13.658852 containerd[1975]: time="2025-07-07T00:00:13.657966945Z" level=info msg="RemovePodSandbox \"1f1c9df6fb6a31232cc4a8203ff37ecf518ecccf1c1a4cd10f17dbe0f78f39a3\" returns successfully" Jul 7 00:00:13.668254 containerd[1975]: time="2025-07-07T00:00:13.668211049Z" level=info msg="StopPodSandbox for \"ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154\"" Jul 7 00:00:13.747823 containerd[1975]: 2025-07-07 00:00:13.707 [WARNING][6104] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--hbf8l-eth0", GenerateName:"calico-apiserver-78dd578d87-", Namespace:"calico-apiserver", SelfLink:"", UID:"4073ee90-8739-4135-b438-25bdb06e58b4", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78dd578d87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-95", ContainerID:"4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643", Pod:"calico-apiserver-78dd578d87-hbf8l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid7b233dadea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:13.747823 containerd[1975]: 2025-07-07 00:00:13.707 [INFO][6104] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154" Jul 7 00:00:13.747823 containerd[1975]: 2025-07-07 00:00:13.707 [INFO][6104] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154" iface="eth0" netns="" Jul 7 00:00:13.747823 containerd[1975]: 2025-07-07 00:00:13.707 [INFO][6104] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154" Jul 7 00:00:13.747823 containerd[1975]: 2025-07-07 00:00:13.707 [INFO][6104] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154" Jul 7 00:00:13.747823 containerd[1975]: 2025-07-07 00:00:13.733 [INFO][6112] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154" HandleID="k8s-pod-network.ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--hbf8l-eth0" Jul 7 00:00:13.747823 containerd[1975]: 2025-07-07 00:00:13.734 [INFO][6112] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:13.747823 containerd[1975]: 2025-07-07 00:00:13.734 [INFO][6112] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:00:13.747823 containerd[1975]: 2025-07-07 00:00:13.741 [WARNING][6112] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154" HandleID="k8s-pod-network.ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--hbf8l-eth0" Jul 7 00:00:13.747823 containerd[1975]: 2025-07-07 00:00:13.741 [INFO][6112] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154" HandleID="k8s-pod-network.ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--hbf8l-eth0" Jul 7 00:00:13.747823 containerd[1975]: 2025-07-07 00:00:13.743 [INFO][6112] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:13.747823 containerd[1975]: 2025-07-07 00:00:13.745 [INFO][6104] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154" Jul 7 00:00:13.748683 containerd[1975]: time="2025-07-07T00:00:13.747912298Z" level=info msg="TearDown network for sandbox \"ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154\" successfully" Jul 7 00:00:13.748683 containerd[1975]: time="2025-07-07T00:00:13.747943443Z" level=info msg="StopPodSandbox for \"ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154\" returns successfully" Jul 7 00:00:13.748683 containerd[1975]: time="2025-07-07T00:00:13.748458286Z" level=info msg="RemovePodSandbox for \"ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154\"" Jul 7 00:00:13.748683 containerd[1975]: time="2025-07-07T00:00:13.748492502Z" level=info msg="Forcibly stopping sandbox \"ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154\"" Jul 7 00:00:13.825073 containerd[1975]: 2025-07-07 00:00:13.787 [WARNING][6126] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--hbf8l-eth0", GenerateName:"calico-apiserver-78dd578d87-", Namespace:"calico-apiserver", SelfLink:"", UID:"4073ee90-8739-4135-b438-25bdb06e58b4", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78dd578d87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-95", ContainerID:"4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643", Pod:"calico-apiserver-78dd578d87-hbf8l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid7b233dadea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:13.825073 containerd[1975]: 2025-07-07 00:00:13.787 [INFO][6126] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154" Jul 7 00:00:13.825073 containerd[1975]: 2025-07-07 00:00:13.787 [INFO][6126] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154" iface="eth0" netns="" Jul 7 00:00:13.825073 containerd[1975]: 2025-07-07 00:00:13.787 [INFO][6126] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154" Jul 7 00:00:13.825073 containerd[1975]: 2025-07-07 00:00:13.787 [INFO][6126] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154" Jul 7 00:00:13.825073 containerd[1975]: 2025-07-07 00:00:13.812 [INFO][6133] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154" HandleID="k8s-pod-network.ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--hbf8l-eth0" Jul 7 00:00:13.825073 containerd[1975]: 2025-07-07 00:00:13.812 [INFO][6133] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:13.825073 containerd[1975]: 2025-07-07 00:00:13.812 [INFO][6133] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:00:13.825073 containerd[1975]: 2025-07-07 00:00:13.819 [WARNING][6133] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154" HandleID="k8s-pod-network.ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--hbf8l-eth0" Jul 7 00:00:13.825073 containerd[1975]: 2025-07-07 00:00:13.819 [INFO][6133] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154" HandleID="k8s-pod-network.ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--hbf8l-eth0" Jul 7 00:00:13.825073 containerd[1975]: 2025-07-07 00:00:13.821 [INFO][6133] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:13.825073 containerd[1975]: 2025-07-07 00:00:13.823 [INFO][6126] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154" Jul 7 00:00:13.827478 containerd[1975]: time="2025-07-07T00:00:13.825112872Z" level=info msg="TearDown network for sandbox \"ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154\" successfully" Jul 7 00:00:13.831519 containerd[1975]: time="2025-07-07T00:00:13.831443626Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:00:13.831519 containerd[1975]: time="2025-07-07T00:00:13.831506472Z" level=info msg="RemovePodSandbox \"ba695b717fc2c65b9c33815473fc5ebb165ce77d0d6fb92359e64f9b0fcaa154\" returns successfully" Jul 7 00:00:13.832028 containerd[1975]: time="2025-07-07T00:00:13.831992521Z" level=info msg="StopPodSandbox for \"186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5\"" Jul 7 00:00:13.940008 containerd[1975]: 2025-07-07 00:00:13.876 [WARNING][6147] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--95-k8s-coredns--674b8bbfcf--7m765-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a11ff9fd-e988-4620-8c05-f0bff4ac262f", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-95", ContainerID:"762902ec33b31331472eba8c217fa365b84372ea63bed5c039d7fb19a868d592", Pod:"coredns-674b8bbfcf-7m765", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali18074825c5d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:13.940008 containerd[1975]: 2025-07-07 00:00:13.878 [INFO][6147] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5" Jul 7 00:00:13.940008 containerd[1975]: 2025-07-07 00:00:13.878 [INFO][6147] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5" iface="eth0" netns="" Jul 7 00:00:13.940008 containerd[1975]: 2025-07-07 00:00:13.878 [INFO][6147] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5" Jul 7 00:00:13.940008 containerd[1975]: 2025-07-07 00:00:13.878 [INFO][6147] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5" Jul 7 00:00:13.940008 containerd[1975]: 2025-07-07 00:00:13.916 [INFO][6154] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5" HandleID="k8s-pod-network.186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5" Workload="ip--172--31--21--95-k8s-coredns--674b8bbfcf--7m765-eth0" Jul 7 00:00:13.940008 containerd[1975]: 2025-07-07 00:00:13.917 [INFO][6154] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:13.940008 containerd[1975]: 2025-07-07 00:00:13.917 [INFO][6154] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:00:13.940008 containerd[1975]: 2025-07-07 00:00:13.926 [WARNING][6154] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5" HandleID="k8s-pod-network.186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5" Workload="ip--172--31--21--95-k8s-coredns--674b8bbfcf--7m765-eth0" Jul 7 00:00:13.940008 containerd[1975]: 2025-07-07 00:00:13.927 [INFO][6154] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5" HandleID="k8s-pod-network.186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5" Workload="ip--172--31--21--95-k8s-coredns--674b8bbfcf--7m765-eth0" Jul 7 00:00:13.940008 containerd[1975]: 2025-07-07 00:00:13.934 [INFO][6154] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:13.940008 containerd[1975]: 2025-07-07 00:00:13.937 [INFO][6147] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5" Jul 7 00:00:13.941537 containerd[1975]: time="2025-07-07T00:00:13.940248359Z" level=info msg="TearDown network for sandbox \"186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5\" successfully" Jul 7 00:00:13.941537 containerd[1975]: time="2025-07-07T00:00:13.940282686Z" level=info msg="StopPodSandbox for \"186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5\" returns successfully" Jul 7 00:00:13.942649 containerd[1975]: time="2025-07-07T00:00:13.942616111Z" level=info msg="RemovePodSandbox for \"186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5\"" Jul 7 00:00:13.942758 containerd[1975]: time="2025-07-07T00:00:13.942656940Z" level=info msg="Forcibly stopping sandbox \"186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5\"" Jul 7 00:00:14.119359 containerd[1975]: 2025-07-07 00:00:14.013 [WARNING][6170] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--95-k8s-coredns--674b8bbfcf--7m765-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a11ff9fd-e988-4620-8c05-f0bff4ac262f", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-95", ContainerID:"762902ec33b31331472eba8c217fa365b84372ea63bed5c039d7fb19a868d592", Pod:"coredns-674b8bbfcf-7m765", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali18074825c5d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:14.119359 containerd[1975]: 2025-07-07 00:00:14.014 [INFO][6170] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5" Jul 7 00:00:14.119359 containerd[1975]: 2025-07-07 00:00:14.014 [INFO][6170] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5" iface="eth0" netns="" Jul 7 00:00:14.119359 containerd[1975]: 2025-07-07 00:00:14.014 [INFO][6170] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5" Jul 7 00:00:14.119359 containerd[1975]: 2025-07-07 00:00:14.014 [INFO][6170] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5" Jul 7 00:00:14.119359 containerd[1975]: 2025-07-07 00:00:14.056 [INFO][6177] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5" HandleID="k8s-pod-network.186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5" Workload="ip--172--31--21--95-k8s-coredns--674b8bbfcf--7m765-eth0" Jul 7 00:00:14.119359 containerd[1975]: 2025-07-07 00:00:14.056 [INFO][6177] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:14.119359 containerd[1975]: 2025-07-07 00:00:14.056 [INFO][6177] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:00:14.119359 containerd[1975]: 2025-07-07 00:00:14.079 [WARNING][6177] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5" HandleID="k8s-pod-network.186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5" Workload="ip--172--31--21--95-k8s-coredns--674b8bbfcf--7m765-eth0" Jul 7 00:00:14.119359 containerd[1975]: 2025-07-07 00:00:14.079 [INFO][6177] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5" HandleID="k8s-pod-network.186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5" Workload="ip--172--31--21--95-k8s-coredns--674b8bbfcf--7m765-eth0" Jul 7 00:00:14.119359 containerd[1975]: 2025-07-07 00:00:14.085 [INFO][6177] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:14.119359 containerd[1975]: 2025-07-07 00:00:14.096 [INFO][6170] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5" Jul 7 00:00:14.119359 containerd[1975]: time="2025-07-07T00:00:14.119196451Z" level=info msg="TearDown network for sandbox \"186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5\" successfully" Jul 7 00:00:14.131510 containerd[1975]: time="2025-07-07T00:00:14.131199265Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:00:14.132408 containerd[1975]: time="2025-07-07T00:00:14.132065567Z" level=info msg="RemovePodSandbox \"186f23745d7c43edc8fb5c07f752319a9e7fe92906e701f484472c72358a41a5\" returns successfully" Jul 7 00:00:14.135862 containerd[1975]: time="2025-07-07T00:00:14.135800211Z" level=info msg="StopPodSandbox for \"80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4\"" Jul 7 00:00:14.227468 containerd[1975]: 2025-07-07 00:00:14.187 [WARNING][6194] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4" WorkloadEndpoint="ip--172--31--21--95-k8s-whisker--5475cbb56f--7hvwg-eth0" Jul 7 00:00:14.227468 containerd[1975]: 2025-07-07 00:00:14.187 [INFO][6194] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4" Jul 7 00:00:14.227468 containerd[1975]: 2025-07-07 00:00:14.187 [INFO][6194] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4" iface="eth0" netns="" Jul 7 00:00:14.227468 containerd[1975]: 2025-07-07 00:00:14.187 [INFO][6194] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4" Jul 7 00:00:14.227468 containerd[1975]: 2025-07-07 00:00:14.187 [INFO][6194] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4" Jul 7 00:00:14.227468 containerd[1975]: 2025-07-07 00:00:14.213 [INFO][6201] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4" HandleID="k8s-pod-network.80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4" Workload="ip--172--31--21--95-k8s-whisker--5475cbb56f--7hvwg-eth0" Jul 7 00:00:14.227468 containerd[1975]: 2025-07-07 00:00:14.213 [INFO][6201] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:14.227468 containerd[1975]: 2025-07-07 00:00:14.213 [INFO][6201] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:00:14.227468 containerd[1975]: 2025-07-07 00:00:14.221 [WARNING][6201] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4" HandleID="k8s-pod-network.80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4" Workload="ip--172--31--21--95-k8s-whisker--5475cbb56f--7hvwg-eth0" Jul 7 00:00:14.227468 containerd[1975]: 2025-07-07 00:00:14.221 [INFO][6201] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4" HandleID="k8s-pod-network.80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4" Workload="ip--172--31--21--95-k8s-whisker--5475cbb56f--7hvwg-eth0" Jul 7 00:00:14.227468 containerd[1975]: 2025-07-07 00:00:14.223 [INFO][6201] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:14.227468 containerd[1975]: 2025-07-07 00:00:14.225 [INFO][6194] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4" Jul 7 00:00:14.228832 containerd[1975]: time="2025-07-07T00:00:14.227983673Z" level=info msg="TearDown network for sandbox \"80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4\" successfully" Jul 7 00:00:14.228832 containerd[1975]: time="2025-07-07T00:00:14.228077085Z" level=info msg="StopPodSandbox for \"80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4\" returns successfully" Jul 7 00:00:14.230029 containerd[1975]: time="2025-07-07T00:00:14.229701611Z" level=info msg="RemovePodSandbox for \"80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4\"" Jul 7 00:00:14.230029 containerd[1975]: time="2025-07-07T00:00:14.229733338Z" level=info msg="Forcibly stopping sandbox \"80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4\"" Jul 7 00:00:14.308938 containerd[1975]: 2025-07-07 00:00:14.268 [WARNING][6216] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4" WorkloadEndpoint="ip--172--31--21--95-k8s-whisker--5475cbb56f--7hvwg-eth0" Jul 7 00:00:14.308938 containerd[1975]: 2025-07-07 00:00:14.268 [INFO][6216] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4" Jul 7 00:00:14.308938 containerd[1975]: 2025-07-07 00:00:14.268 [INFO][6216] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4" iface="eth0" netns="" Jul 7 00:00:14.308938 containerd[1975]: 2025-07-07 00:00:14.268 [INFO][6216] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4" Jul 7 00:00:14.308938 containerd[1975]: 2025-07-07 00:00:14.268 [INFO][6216] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4" Jul 7 00:00:14.308938 containerd[1975]: 2025-07-07 00:00:14.295 [INFO][6223] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4" HandleID="k8s-pod-network.80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4" Workload="ip--172--31--21--95-k8s-whisker--5475cbb56f--7hvwg-eth0" Jul 7 00:00:14.308938 containerd[1975]: 2025-07-07 00:00:14.295 [INFO][6223] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:14.308938 containerd[1975]: 2025-07-07 00:00:14.295 [INFO][6223] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:00:14.308938 containerd[1975]: 2025-07-07 00:00:14.302 [WARNING][6223] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4" HandleID="k8s-pod-network.80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4" Workload="ip--172--31--21--95-k8s-whisker--5475cbb56f--7hvwg-eth0" Jul 7 00:00:14.308938 containerd[1975]: 2025-07-07 00:00:14.302 [INFO][6223] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4" HandleID="k8s-pod-network.80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4" Workload="ip--172--31--21--95-k8s-whisker--5475cbb56f--7hvwg-eth0" Jul 7 00:00:14.308938 containerd[1975]: 2025-07-07 00:00:14.304 [INFO][6223] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:14.308938 containerd[1975]: 2025-07-07 00:00:14.306 [INFO][6216] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4" Jul 7 00:00:14.308938 containerd[1975]: time="2025-07-07T00:00:14.308138907Z" level=info msg="TearDown network for sandbox \"80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4\" successfully" Jul 7 00:00:14.316653 containerd[1975]: time="2025-07-07T00:00:14.316596762Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:00:14.316653 containerd[1975]: time="2025-07-07T00:00:14.316657560Z" level=info msg="RemovePodSandbox \"80ceaff241e582244c4d56c4a43cf13c7f7edf7fcd839072641ce655744aedb4\" returns successfully" Jul 7 00:00:14.317184 containerd[1975]: time="2025-07-07T00:00:14.317133319Z" level=info msg="StopPodSandbox for \"a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328\"" Jul 7 00:00:14.406527 containerd[1975]: 2025-07-07 00:00:14.358 [WARNING][6237] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--95-k8s-goldmane--768f4c5c69--5rnc5-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"19ccd617-9252-4320-ae1c-b3a2be4963b2", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-95", ContainerID:"1f48305019e25ebf4c29c05d049e9135ff931c8f2247ae7322f3ce0959c4ff25", Pod:"goldmane-768f4c5c69-5rnc5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.15.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibf215de4fd8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:14.406527 containerd[1975]: 2025-07-07 00:00:14.359 [INFO][6237] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328" Jul 7 00:00:14.406527 containerd[1975]: 2025-07-07 00:00:14.359 [INFO][6237] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328" iface="eth0" netns="" Jul 7 00:00:14.406527 containerd[1975]: 2025-07-07 00:00:14.359 [INFO][6237] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328" Jul 7 00:00:14.406527 containerd[1975]: 2025-07-07 00:00:14.359 [INFO][6237] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328" Jul 7 00:00:14.406527 containerd[1975]: 2025-07-07 00:00:14.390 [INFO][6244] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328" HandleID="k8s-pod-network.a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328" Workload="ip--172--31--21--95-k8s-goldmane--768f4c5c69--5rnc5-eth0" Jul 7 00:00:14.406527 containerd[1975]: 2025-07-07 00:00:14.390 [INFO][6244] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:14.406527 containerd[1975]: 2025-07-07 00:00:14.390 [INFO][6244] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:00:14.406527 containerd[1975]: 2025-07-07 00:00:14.399 [WARNING][6244] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328" HandleID="k8s-pod-network.a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328" Workload="ip--172--31--21--95-k8s-goldmane--768f4c5c69--5rnc5-eth0" Jul 7 00:00:14.406527 containerd[1975]: 2025-07-07 00:00:14.399 [INFO][6244] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328" HandleID="k8s-pod-network.a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328" Workload="ip--172--31--21--95-k8s-goldmane--768f4c5c69--5rnc5-eth0" Jul 7 00:00:14.406527 containerd[1975]: 2025-07-07 00:00:14.402 [INFO][6244] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:14.406527 containerd[1975]: 2025-07-07 00:00:14.404 [INFO][6237] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328" Jul 7 00:00:14.407241 containerd[1975]: time="2025-07-07T00:00:14.406604688Z" level=info msg="TearDown network for sandbox \"a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328\" successfully" Jul 7 00:00:14.407241 containerd[1975]: time="2025-07-07T00:00:14.406654639Z" level=info msg="StopPodSandbox for \"a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328\" returns successfully" Jul 7 00:00:14.407846 containerd[1975]: time="2025-07-07T00:00:14.407803080Z" level=info msg="RemovePodSandbox for \"a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328\"" Jul 7 00:00:14.407979 containerd[1975]: time="2025-07-07T00:00:14.407848999Z" level=info msg="Forcibly stopping sandbox \"a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328\"" Jul 7 00:00:14.520001 containerd[1975]: 2025-07-07 00:00:14.448 [WARNING][6259] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--95-k8s-goldmane--768f4c5c69--5rnc5-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"19ccd617-9252-4320-ae1c-b3a2be4963b2", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-95", ContainerID:"1f48305019e25ebf4c29c05d049e9135ff931c8f2247ae7322f3ce0959c4ff25", Pod:"goldmane-768f4c5c69-5rnc5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.15.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibf215de4fd8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:14.520001 containerd[1975]: 2025-07-07 00:00:14.448 [INFO][6259] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328" Jul 7 00:00:14.520001 containerd[1975]: 2025-07-07 00:00:14.448 [INFO][6259] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328" iface="eth0" netns="" Jul 7 00:00:14.520001 containerd[1975]: 2025-07-07 00:00:14.448 [INFO][6259] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328" Jul 7 00:00:14.520001 containerd[1975]: 2025-07-07 00:00:14.448 [INFO][6259] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328" Jul 7 00:00:14.520001 containerd[1975]: 2025-07-07 00:00:14.502 [INFO][6266] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328" HandleID="k8s-pod-network.a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328" Workload="ip--172--31--21--95-k8s-goldmane--768f4c5c69--5rnc5-eth0" Jul 7 00:00:14.520001 containerd[1975]: 2025-07-07 00:00:14.503 [INFO][6266] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:14.520001 containerd[1975]: 2025-07-07 00:00:14.503 [INFO][6266] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:00:14.520001 containerd[1975]: 2025-07-07 00:00:14.511 [WARNING][6266] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328" HandleID="k8s-pod-network.a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328" Workload="ip--172--31--21--95-k8s-goldmane--768f4c5c69--5rnc5-eth0" Jul 7 00:00:14.520001 containerd[1975]: 2025-07-07 00:00:14.511 [INFO][6266] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328" HandleID="k8s-pod-network.a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328" Workload="ip--172--31--21--95-k8s-goldmane--768f4c5c69--5rnc5-eth0" Jul 7 00:00:14.520001 containerd[1975]: 2025-07-07 00:00:14.513 [INFO][6266] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:14.520001 containerd[1975]: 2025-07-07 00:00:14.515 [INFO][6259] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328" Jul 7 00:00:14.520001 containerd[1975]: time="2025-07-07T00:00:14.518972865Z" level=info msg="TearDown network for sandbox \"a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328\" successfully" Jul 7 00:00:14.530512 containerd[1975]: time="2025-07-07T00:00:14.526562955Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:00:14.530512 containerd[1975]: time="2025-07-07T00:00:14.526633282Z" level=info msg="RemovePodSandbox \"a32e0fea45aa2a8095627fd844c42542065f7eb612775d2eaf7fd64480563328\" returns successfully" Jul 7 00:00:14.530512 containerd[1975]: time="2025-07-07T00:00:14.527147890Z" level=info msg="StopPodSandbox for \"3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613\"" Jul 7 00:00:14.630067 containerd[1975]: 2025-07-07 00:00:14.590 [WARNING][6280] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--95-k8s-coredns--674b8bbfcf--2s8cg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"93b5c195-f2cb-4978-9046-bbb50dfd5a25", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-95", ContainerID:"04f2b9d4616b65f7aaeeae13916fb7235fa4deb93d720aafcbc3311e6fc513f1", Pod:"coredns-674b8bbfcf-2s8cg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali478539d4343", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:14.630067 containerd[1975]: 2025-07-07 00:00:14.590 [INFO][6280] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613" Jul 7 00:00:14.630067 containerd[1975]: 2025-07-07 00:00:14.590 [INFO][6280] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613" iface="eth0" netns="" Jul 7 00:00:14.630067 containerd[1975]: 2025-07-07 00:00:14.590 [INFO][6280] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613" Jul 7 00:00:14.630067 containerd[1975]: 2025-07-07 00:00:14.590 [INFO][6280] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613" Jul 7 00:00:14.630067 containerd[1975]: 2025-07-07 00:00:14.616 [INFO][6288] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613" HandleID="k8s-pod-network.3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613" Workload="ip--172--31--21--95-k8s-coredns--674b8bbfcf--2s8cg-eth0" Jul 7 00:00:14.630067 containerd[1975]: 2025-07-07 00:00:14.616 [INFO][6288] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:14.630067 containerd[1975]: 2025-07-07 00:00:14.616 [INFO][6288] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:00:14.630067 containerd[1975]: 2025-07-07 00:00:14.624 [WARNING][6288] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613" HandleID="k8s-pod-network.3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613" Workload="ip--172--31--21--95-k8s-coredns--674b8bbfcf--2s8cg-eth0" Jul 7 00:00:14.630067 containerd[1975]: 2025-07-07 00:00:14.624 [INFO][6288] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613" HandleID="k8s-pod-network.3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613" Workload="ip--172--31--21--95-k8s-coredns--674b8bbfcf--2s8cg-eth0" Jul 7 00:00:14.630067 containerd[1975]: 2025-07-07 00:00:14.626 [INFO][6288] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:14.630067 containerd[1975]: 2025-07-07 00:00:14.628 [INFO][6280] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613" Jul 7 00:00:14.630541 containerd[1975]: time="2025-07-07T00:00:14.630097197Z" level=info msg="TearDown network for sandbox \"3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613\" successfully" Jul 7 00:00:14.630541 containerd[1975]: time="2025-07-07T00:00:14.630119595Z" level=info msg="StopPodSandbox for \"3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613\" returns successfully" Jul 7 00:00:14.630541 containerd[1975]: time="2025-07-07T00:00:14.630490673Z" level=info msg="RemovePodSandbox for \"3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613\"" Jul 7 00:00:14.630541 containerd[1975]: time="2025-07-07T00:00:14.630513680Z" level=info msg="Forcibly stopping sandbox \"3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613\"" Jul 7 00:00:14.718321 containerd[1975]: 2025-07-07 00:00:14.670 [WARNING][6303] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--95-k8s-coredns--674b8bbfcf--2s8cg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"93b5c195-f2cb-4978-9046-bbb50dfd5a25", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-95", ContainerID:"04f2b9d4616b65f7aaeeae13916fb7235fa4deb93d720aafcbc3311e6fc513f1", Pod:"coredns-674b8bbfcf-2s8cg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali478539d4343", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:14.718321 containerd[1975]: 2025-07-07 00:00:14.671 [INFO][6303] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613" Jul 7 00:00:14.718321 containerd[1975]: 2025-07-07 00:00:14.671 [INFO][6303] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613" iface="eth0" netns="" Jul 7 00:00:14.718321 containerd[1975]: 2025-07-07 00:00:14.671 [INFO][6303] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613" Jul 7 00:00:14.718321 containerd[1975]: 2025-07-07 00:00:14.671 [INFO][6303] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613" Jul 7 00:00:14.718321 containerd[1975]: 2025-07-07 00:00:14.702 [INFO][6310] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613" HandleID="k8s-pod-network.3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613" Workload="ip--172--31--21--95-k8s-coredns--674b8bbfcf--2s8cg-eth0" Jul 7 00:00:14.718321 containerd[1975]: 2025-07-07 00:00:14.702 [INFO][6310] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:14.718321 containerd[1975]: 2025-07-07 00:00:14.702 [INFO][6310] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:00:14.718321 containerd[1975]: 2025-07-07 00:00:14.711 [WARNING][6310] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613" HandleID="k8s-pod-network.3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613" Workload="ip--172--31--21--95-k8s-coredns--674b8bbfcf--2s8cg-eth0" Jul 7 00:00:14.718321 containerd[1975]: 2025-07-07 00:00:14.711 [INFO][6310] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613" HandleID="k8s-pod-network.3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613" Workload="ip--172--31--21--95-k8s-coredns--674b8bbfcf--2s8cg-eth0" Jul 7 00:00:14.718321 containerd[1975]: 2025-07-07 00:00:14.714 [INFO][6310] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:14.718321 containerd[1975]: 2025-07-07 00:00:14.716 [INFO][6303] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613" Jul 7 00:00:14.719060 containerd[1975]: time="2025-07-07T00:00:14.718364841Z" level=info msg="TearDown network for sandbox \"3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613\" successfully" Jul 7 00:00:14.768212 containerd[1975]: time="2025-07-07T00:00:14.768153154Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:00:14.769433 containerd[1975]: time="2025-07-07T00:00:14.768232414Z" level=info msg="RemovePodSandbox \"3aeec80239f167bd0fd5581702812765abe8f03fe266d6b18f0daf2656d20613\" returns successfully" Jul 7 00:00:14.769433 containerd[1975]: time="2025-07-07T00:00:14.768822075Z" level=info msg="StopPodSandbox for \"adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116\"" Jul 7 00:00:14.866340 containerd[1975]: 2025-07-07 00:00:14.812 [WARNING][6324] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--95-k8s-calico--apiserver--8484c8784c--78zl4-eth0", GenerateName:"calico-apiserver-8484c8784c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e6136ec6-ffc6-441a-9474-e2f8829c266e", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8484c8784c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-95", ContainerID:"3c4763e818561c7ea4bb7c80ebab540caeffdc822297a897a8773359cdb2947a", Pod:"calico-apiserver-8484c8784c-78zl4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0dfade95f97", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:14.866340 containerd[1975]: 2025-07-07 00:00:14.812 [INFO][6324] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116" Jul 7 00:00:14.866340 containerd[1975]: 2025-07-07 00:00:14.813 [INFO][6324] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116" iface="eth0" netns="" Jul 7 00:00:14.866340 containerd[1975]: 2025-07-07 00:00:14.813 [INFO][6324] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116" Jul 7 00:00:14.866340 containerd[1975]: 2025-07-07 00:00:14.813 [INFO][6324] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116" Jul 7 00:00:14.866340 containerd[1975]: 2025-07-07 00:00:14.841 [INFO][6331] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116" HandleID="k8s-pod-network.adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116" Workload="ip--172--31--21--95-k8s-calico--apiserver--8484c8784c--78zl4-eth0" Jul 7 00:00:14.866340 containerd[1975]: 2025-07-07 00:00:14.841 [INFO][6331] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:14.866340 containerd[1975]: 2025-07-07 00:00:14.841 [INFO][6331] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:00:14.866340 containerd[1975]: 2025-07-07 00:00:14.848 [WARNING][6331] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116" HandleID="k8s-pod-network.adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116" Workload="ip--172--31--21--95-k8s-calico--apiserver--8484c8784c--78zl4-eth0" Jul 7 00:00:14.866340 containerd[1975]: 2025-07-07 00:00:14.848 [INFO][6331] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116" HandleID="k8s-pod-network.adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116" Workload="ip--172--31--21--95-k8s-calico--apiserver--8484c8784c--78zl4-eth0" Jul 7 00:00:14.866340 containerd[1975]: 2025-07-07 00:00:14.851 [INFO][6331] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:14.866340 containerd[1975]: 2025-07-07 00:00:14.860 [INFO][6324] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116" Jul 7 00:00:14.867100 containerd[1975]: time="2025-07-07T00:00:14.866371452Z" level=info msg="TearDown network for sandbox \"adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116\" successfully" Jul 7 00:00:14.867100 containerd[1975]: time="2025-07-07T00:00:14.866427904Z" level=info msg="StopPodSandbox for \"adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116\" returns successfully" Jul 7 00:00:14.868344 containerd[1975]: time="2025-07-07T00:00:14.867826573Z" level=info msg="RemovePodSandbox for \"adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116\"" Jul 7 00:00:14.868344 containerd[1975]: time="2025-07-07T00:00:14.867892443Z" level=info msg="Forcibly stopping sandbox \"adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116\"" Jul 7 00:00:14.975815 containerd[1975]: 2025-07-07 00:00:14.927 [WARNING][6345] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--95-k8s-calico--apiserver--8484c8784c--78zl4-eth0", GenerateName:"calico-apiserver-8484c8784c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e6136ec6-ffc6-441a-9474-e2f8829c266e", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8484c8784c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-95", ContainerID:"3c4763e818561c7ea4bb7c80ebab540caeffdc822297a897a8773359cdb2947a", Pod:"calico-apiserver-8484c8784c-78zl4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0dfade95f97", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:14.975815 containerd[1975]: 2025-07-07 00:00:14.927 [INFO][6345] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116" Jul 7 00:00:14.975815 containerd[1975]: 2025-07-07 00:00:14.927 [INFO][6345] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116" iface="eth0" netns="" Jul 7 00:00:14.975815 containerd[1975]: 2025-07-07 00:00:14.927 [INFO][6345] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116" Jul 7 00:00:14.975815 containerd[1975]: 2025-07-07 00:00:14.927 [INFO][6345] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116" Jul 7 00:00:14.975815 containerd[1975]: 2025-07-07 00:00:14.961 [INFO][6352] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116" HandleID="k8s-pod-network.adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116" Workload="ip--172--31--21--95-k8s-calico--apiserver--8484c8784c--78zl4-eth0" Jul 7 00:00:14.975815 containerd[1975]: 2025-07-07 00:00:14.961 [INFO][6352] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:14.975815 containerd[1975]: 2025-07-07 00:00:14.961 [INFO][6352] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:00:14.975815 containerd[1975]: 2025-07-07 00:00:14.968 [WARNING][6352] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116" HandleID="k8s-pod-network.adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116" Workload="ip--172--31--21--95-k8s-calico--apiserver--8484c8784c--78zl4-eth0" Jul 7 00:00:14.975815 containerd[1975]: 2025-07-07 00:00:14.968 [INFO][6352] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116" HandleID="k8s-pod-network.adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116" Workload="ip--172--31--21--95-k8s-calico--apiserver--8484c8784c--78zl4-eth0" Jul 7 00:00:14.975815 containerd[1975]: 2025-07-07 00:00:14.971 [INFO][6352] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:14.975815 containerd[1975]: 2025-07-07 00:00:14.973 [INFO][6345] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116" Jul 7 00:00:14.976462 containerd[1975]: time="2025-07-07T00:00:14.975860738Z" level=info msg="TearDown network for sandbox \"adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116\" successfully" Jul 7 00:00:14.982145 containerd[1975]: time="2025-07-07T00:00:14.981846726Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:00:14.982145 containerd[1975]: time="2025-07-07T00:00:14.981936997Z" level=info msg="RemovePodSandbox \"adfec58f39a0c62ec8a8e5fb05a12f72b99c5284d71431810881d39398168116\" returns successfully" Jul 7 00:00:14.982946 containerd[1975]: time="2025-07-07T00:00:14.982902586Z" level=info msg="StopPodSandbox for \"df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1\"" Jul 7 00:00:15.099607 containerd[1975]: 2025-07-07 00:00:15.032 [WARNING][6366] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--r8llj-eth0", GenerateName:"calico-apiserver-78dd578d87-", Namespace:"calico-apiserver", SelfLink:"", UID:"09c6b849-d9f2-457c-9d21-c2403e3bc700", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78dd578d87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-95", ContainerID:"a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce", Pod:"calico-apiserver-78dd578d87-r8llj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali691761c7e36", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:15.099607 containerd[1975]: 2025-07-07 00:00:15.032 [INFO][6366] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1" Jul 7 00:00:15.099607 containerd[1975]: 2025-07-07 00:00:15.032 [INFO][6366] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1" iface="eth0" netns="" Jul 7 00:00:15.099607 containerd[1975]: 2025-07-07 00:00:15.032 [INFO][6366] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1" Jul 7 00:00:15.099607 containerd[1975]: 2025-07-07 00:00:15.032 [INFO][6366] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1" Jul 7 00:00:15.099607 containerd[1975]: 2025-07-07 00:00:15.076 [INFO][6373] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1" HandleID="k8s-pod-network.df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--r8llj-eth0" Jul 7 00:00:15.099607 containerd[1975]: 2025-07-07 00:00:15.076 [INFO][6373] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:15.099607 containerd[1975]: 2025-07-07 00:00:15.076 [INFO][6373] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:00:15.099607 containerd[1975]: 2025-07-07 00:00:15.084 [WARNING][6373] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1" HandleID="k8s-pod-network.df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--r8llj-eth0" Jul 7 00:00:15.099607 containerd[1975]: 2025-07-07 00:00:15.084 [INFO][6373] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1" HandleID="k8s-pod-network.df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--r8llj-eth0" Jul 7 00:00:15.099607 containerd[1975]: 2025-07-07 00:00:15.086 [INFO][6373] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:15.099607 containerd[1975]: 2025-07-07 00:00:15.093 [INFO][6366] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1" Jul 7 00:00:15.101934 containerd[1975]: time="2025-07-07T00:00:15.100456489Z" level=info msg="TearDown network for sandbox \"df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1\" successfully" Jul 7 00:00:15.101934 containerd[1975]: time="2025-07-07T00:00:15.100492113Z" level=info msg="StopPodSandbox for \"df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1\" returns successfully" Jul 7 00:00:15.102101 containerd[1975]: time="2025-07-07T00:00:15.102055435Z" level=info msg="RemovePodSandbox for \"df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1\"" Jul 7 00:00:15.102177 containerd[1975]: time="2025-07-07T00:00:15.102116046Z" level=info msg="Forcibly stopping sandbox \"df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1\"" Jul 7 00:00:15.228697 containerd[1975]: 2025-07-07 00:00:15.165 [WARNING][6388] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--r8llj-eth0", GenerateName:"calico-apiserver-78dd578d87-", Namespace:"calico-apiserver", SelfLink:"", UID:"09c6b849-d9f2-457c-9d21-c2403e3bc700", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78dd578d87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-95", ContainerID:"a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce", Pod:"calico-apiserver-78dd578d87-r8llj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali691761c7e36", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:15.228697 containerd[1975]: 2025-07-07 00:00:15.166 [INFO][6388] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1" Jul 7 00:00:15.228697 containerd[1975]: 2025-07-07 00:00:15.166 [INFO][6388] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1" iface="eth0" netns="" Jul 7 00:00:15.228697 containerd[1975]: 2025-07-07 00:00:15.166 [INFO][6388] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1" Jul 7 00:00:15.228697 containerd[1975]: 2025-07-07 00:00:15.166 [INFO][6388] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1" Jul 7 00:00:15.228697 containerd[1975]: 2025-07-07 00:00:15.204 [INFO][6395] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1" HandleID="k8s-pod-network.df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--r8llj-eth0" Jul 7 00:00:15.228697 containerd[1975]: 2025-07-07 00:00:15.204 [INFO][6395] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:15.228697 containerd[1975]: 2025-07-07 00:00:15.205 [INFO][6395] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:00:15.228697 containerd[1975]: 2025-07-07 00:00:15.217 [WARNING][6395] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1" HandleID="k8s-pod-network.df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--r8llj-eth0" Jul 7 00:00:15.228697 containerd[1975]: 2025-07-07 00:00:15.217 [INFO][6395] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1" HandleID="k8s-pod-network.df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--r8llj-eth0" Jul 7 00:00:15.228697 containerd[1975]: 2025-07-07 00:00:15.223 [INFO][6395] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:15.228697 containerd[1975]: 2025-07-07 00:00:15.226 [INFO][6388] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1" Jul 7 00:00:15.228697 containerd[1975]: time="2025-07-07T00:00:15.228550390Z" level=info msg="TearDown network for sandbox \"df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1\" successfully" Jul 7 00:00:15.236067 containerd[1975]: time="2025-07-07T00:00:15.236010751Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:00:15.236533 containerd[1975]: time="2025-07-07T00:00:15.236351773Z" level=info msg="RemovePodSandbox \"df55080a56c0e465670da966fe8f108e3b7c7427a9ee0642104dc6e963ca0bf1\" returns successfully" Jul 7 00:00:15.238007 containerd[1975]: time="2025-07-07T00:00:15.237418578Z" level=info msg="StopPodSandbox for \"c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5\"" Jul 7 00:00:15.362878 containerd[1975]: 2025-07-07 00:00:15.298 [WARNING][6409] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--95-k8s-calico--kube--controllers--6868664579--646k8-eth0", GenerateName:"calico-kube-controllers-6868664579-", Namespace:"calico-system", SelfLink:"", UID:"f5549cc7-b328-4c09-b9a8-a657f9c3b244", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6868664579", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-95", ContainerID:"1e0b022d7a2b3fa27b69632dd96a0bc029a50f040a21f2bc9dadcbe4a8b4c8fe", Pod:"calico-kube-controllers-6868664579-646k8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.15.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5aed4505c37", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:15.362878 containerd[1975]: 2025-07-07 00:00:15.299 [INFO][6409] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5" Jul 7 00:00:15.362878 containerd[1975]: 2025-07-07 00:00:15.299 [INFO][6409] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5" iface="eth0" netns="" Jul 7 00:00:15.362878 containerd[1975]: 2025-07-07 00:00:15.299 [INFO][6409] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5" Jul 7 00:00:15.362878 containerd[1975]: 2025-07-07 00:00:15.299 [INFO][6409] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5" Jul 7 00:00:15.362878 containerd[1975]: 2025-07-07 00:00:15.337 [INFO][6418] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5" HandleID="k8s-pod-network.c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5" Workload="ip--172--31--21--95-k8s-calico--kube--controllers--6868664579--646k8-eth0" Jul 7 00:00:15.362878 containerd[1975]: 2025-07-07 00:00:15.338 [INFO][6418] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:15.362878 containerd[1975]: 2025-07-07 00:00:15.338 [INFO][6418] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:00:15.362878 containerd[1975]: 2025-07-07 00:00:15.351 [WARNING][6418] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5" HandleID="k8s-pod-network.c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5" Workload="ip--172--31--21--95-k8s-calico--kube--controllers--6868664579--646k8-eth0" Jul 7 00:00:15.362878 containerd[1975]: 2025-07-07 00:00:15.352 [INFO][6418] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5" HandleID="k8s-pod-network.c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5" Workload="ip--172--31--21--95-k8s-calico--kube--controllers--6868664579--646k8-eth0" Jul 7 00:00:15.362878 containerd[1975]: 2025-07-07 00:00:15.356 [INFO][6418] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:15.362878 containerd[1975]: 2025-07-07 00:00:15.358 [INFO][6409] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5" Jul 7 00:00:15.365754 containerd[1975]: time="2025-07-07T00:00:15.363979732Z" level=info msg="TearDown network for sandbox \"c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5\" successfully" Jul 7 00:00:15.365754 containerd[1975]: time="2025-07-07T00:00:15.364012925Z" level=info msg="StopPodSandbox for \"c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5\" returns successfully" Jul 7 00:00:15.365754 containerd[1975]: time="2025-07-07T00:00:15.364900759Z" level=info msg="RemovePodSandbox for \"c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5\"" Jul 7 00:00:15.365754 containerd[1975]: time="2025-07-07T00:00:15.364933942Z" level=info msg="Forcibly stopping sandbox \"c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5\"" Jul 7 00:00:15.490978 containerd[1975]: 2025-07-07 00:00:15.440 [WARNING][6433] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--95-k8s-calico--kube--controllers--6868664579--646k8-eth0", GenerateName:"calico-kube-controllers-6868664579-", Namespace:"calico-system", SelfLink:"", UID:"f5549cc7-b328-4c09-b9a8-a657f9c3b244", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6868664579", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-95", ContainerID:"1e0b022d7a2b3fa27b69632dd96a0bc029a50f040a21f2bc9dadcbe4a8b4c8fe", Pod:"calico-kube-controllers-6868664579-646k8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.15.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5aed4505c37", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:15.490978 containerd[1975]: 2025-07-07 00:00:15.440 [INFO][6433] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5" Jul 7 00:00:15.490978 containerd[1975]: 2025-07-07 00:00:15.440 [INFO][6433] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5" iface="eth0" netns="" Jul 7 00:00:15.490978 containerd[1975]: 2025-07-07 00:00:15.440 [INFO][6433] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5" Jul 7 00:00:15.490978 containerd[1975]: 2025-07-07 00:00:15.440 [INFO][6433] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5" Jul 7 00:00:15.490978 containerd[1975]: 2025-07-07 00:00:15.473 [INFO][6445] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5" HandleID="k8s-pod-network.c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5" Workload="ip--172--31--21--95-k8s-calico--kube--controllers--6868664579--646k8-eth0" Jul 7 00:00:15.490978 containerd[1975]: 2025-07-07 00:00:15.473 [INFO][6445] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:15.490978 containerd[1975]: 2025-07-07 00:00:15.473 [INFO][6445] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:00:15.490978 containerd[1975]: 2025-07-07 00:00:15.484 [WARNING][6445] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5" HandleID="k8s-pod-network.c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5" Workload="ip--172--31--21--95-k8s-calico--kube--controllers--6868664579--646k8-eth0" Jul 7 00:00:15.490978 containerd[1975]: 2025-07-07 00:00:15.484 [INFO][6445] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5" HandleID="k8s-pod-network.c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5" Workload="ip--172--31--21--95-k8s-calico--kube--controllers--6868664579--646k8-eth0" Jul 7 00:00:15.490978 containerd[1975]: 2025-07-07 00:00:15.486 [INFO][6445] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:15.490978 containerd[1975]: 2025-07-07 00:00:15.488 [INFO][6433] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5" Jul 7 00:00:15.490978 containerd[1975]: time="2025-07-07T00:00:15.490610118Z" level=info msg="TearDown network for sandbox \"c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5\" successfully" Jul 7 00:00:15.498764 containerd[1975]: time="2025-07-07T00:00:15.498710105Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:00:15.498918 containerd[1975]: time="2025-07-07T00:00:15.498797367Z" level=info msg="RemovePodSandbox \"c94caa1cb06a78f5d0e49d347ac1c4fea1752ad3df75480c5fc8bc72f06cc2c5\" returns successfully" Jul 7 00:00:17.598746 containerd[1975]: time="2025-07-07T00:00:17.598668930Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:17.599796 containerd[1975]: time="2025-07-07T00:00:17.599727401Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 7 00:00:17.601783 containerd[1975]: time="2025-07-07T00:00:17.601721195Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:17.605353 containerd[1975]: time="2025-07-07T00:00:17.605206954Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:17.607008 containerd[1975]: time="2025-07-07T00:00:17.606422039Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 7.725162593s" Jul 7 00:00:17.607008 containerd[1975]: time="2025-07-07T00:00:17.606471854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 7 00:00:17.639412 containerd[1975]: time="2025-07-07T00:00:17.639371634Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 7 00:00:17.935017 containerd[1975]: time="2025-07-07T00:00:17.934981087Z" level=info msg="CreateContainer within sandbox \"1e0b022d7a2b3fa27b69632dd96a0bc029a50f040a21f2bc9dadcbe4a8b4c8fe\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 7 00:00:17.954612 containerd[1975]: time="2025-07-07T00:00:17.954477196Z" level=info msg="CreateContainer within sandbox \"1e0b022d7a2b3fa27b69632dd96a0bc029a50f040a21f2bc9dadcbe4a8b4c8fe\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6348f01b8dda969eb1dc481abda4da1994549b11062e2d70feb15b271c6022c2\"" Jul 7 00:00:17.964120 containerd[1975]: time="2025-07-07T00:00:17.964085894Z" level=info msg="StartContainer for \"6348f01b8dda969eb1dc481abda4da1994549b11062e2d70feb15b271c6022c2\"" Jul 7 00:00:18.126768 systemd[1]: Started cri-containerd-6348f01b8dda969eb1dc481abda4da1994549b11062e2d70feb15b271c6022c2.scope - libcontainer container 6348f01b8dda969eb1dc481abda4da1994549b11062e2d70feb15b271c6022c2. Jul 7 00:00:18.264339 containerd[1975]: time="2025-07-07T00:00:18.263098175Z" level=info msg="StartContainer for \"6348f01b8dda969eb1dc481abda4da1994549b11062e2d70feb15b271c6022c2\" returns successfully" Jul 7 00:00:18.588226 systemd[1]: Started sshd@12-172.31.21.95:22-147.75.109.163:52458.service - OpenSSH per-connection server daemon (147.75.109.163:52458). Jul 7 00:00:18.832336 sshd[6505]: Accepted publickey for core from 147.75.109.163 port 52458 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:00:18.838734 sshd[6505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:00:18.849526 systemd-logind[1952]: New session 13 of user core. Jul 7 00:00:18.858196 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 7 00:00:20.324389 kubelet[3195]: I0707 00:00:20.314081 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6868664579-646k8" podStartSLOduration=27.32879153 podStartE2EDuration="45.210656203s" podCreationTimestamp="2025-07-06 23:59:35 +0000 UTC" firstStartedPulling="2025-07-06 23:59:59.75725326 +0000 UTC m=+47.029181589" lastFinishedPulling="2025-07-07 00:00:17.639117928 +0000 UTC m=+64.911046262" observedRunningTime="2025-07-07 00:00:20.12603566 +0000 UTC m=+67.397964036" watchObservedRunningTime="2025-07-07 00:00:20.210656203 +0000 UTC m=+67.482584550" Jul 7 00:00:20.584669 sshd[6505]: pam_unix(sshd:session): session closed for user core Jul 7 00:00:20.594460 systemd[1]: sshd@12-172.31.21.95:22-147.75.109.163:52458.service: Deactivated successfully. Jul 7 00:00:20.601052 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 00:00:20.605762 systemd-logind[1952]: Session 13 logged out. Waiting for processes to exit. Jul 7 00:00:20.611028 systemd-logind[1952]: Removed session 13. Jul 7 00:00:21.278301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4193299609.mount: Deactivated successfully. 
Jul 7 00:00:22.114281 containerd[1975]: time="2025-07-07T00:00:22.114235720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:00:22.116907 containerd[1975]: time="2025-07-07T00:00:22.116815318Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308"
Jul 7 00:00:22.119755 containerd[1975]: time="2025-07-07T00:00:22.118698582Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:00:22.122458 containerd[1975]: time="2025-07-07T00:00:22.122291369Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:00:22.123176 containerd[1975]: time="2025-07-07T00:00:22.123064999Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 4.483654843s"
Jul 7 00:00:22.123176 containerd[1975]: time="2025-07-07T00:00:22.123100107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\""
Jul 7 00:00:22.149053 containerd[1975]: time="2025-07-07T00:00:22.148741541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\""
Jul 7 00:00:22.251424 containerd[1975]: time="2025-07-07T00:00:22.251348221Z" level=info msg="CreateContainer within sandbox \"1f48305019e25ebf4c29c05d049e9135ff931c8f2247ae7322f3ce0959c4ff25\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Jul 7 00:00:22.296162 containerd[1975]: time="2025-07-07T00:00:22.296112484Z" level=info msg="CreateContainer within sandbox \"1f48305019e25ebf4c29c05d049e9135ff931c8f2247ae7322f3ce0959c4ff25\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"a502d555597e2e9a86af10b58804dac453d2cb64843640a5386627d9c8b4ad76\""
Jul 7 00:00:22.309752 containerd[1975]: time="2025-07-07T00:00:22.309711632Z" level=info msg="StartContainer for \"a502d555597e2e9a86af10b58804dac453d2cb64843640a5386627d9c8b4ad76\""
Jul 7 00:00:22.715095 systemd[1]: Started cri-containerd-a502d555597e2e9a86af10b58804dac453d2cb64843640a5386627d9c8b4ad76.scope - libcontainer container a502d555597e2e9a86af10b58804dac453d2cb64843640a5386627d9c8b4ad76.
Jul 7 00:00:22.858619 containerd[1975]: time="2025-07-07T00:00:22.858541792Z" level=info msg="StartContainer for \"a502d555597e2e9a86af10b58804dac453d2cb64843640a5386627d9c8b4ad76\" returns successfully"
Jul 7 00:00:23.646117 kubelet[3195]: I0707 00:00:23.645997 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-5rnc5" podStartSLOduration=29.009192631 podStartE2EDuration="49.645967342s" podCreationTimestamp="2025-07-06 23:59:34 +0000 UTC" firstStartedPulling="2025-07-07 00:00:01.50723737 +0000 UTC m=+48.779165715" lastFinishedPulling="2025-07-07 00:00:22.144012089 +0000 UTC m=+69.415940426" observedRunningTime="2025-07-07 00:00:23.61095859 +0000 UTC m=+70.882886944" watchObservedRunningTime="2025-07-07 00:00:23.645967342 +0000 UTC m=+70.917895688"
Jul 7 00:00:25.577330 systemd[1]: run-containerd-runc-k8s.io-a502d555597e2e9a86af10b58804dac453d2cb64843640a5386627d9c8b4ad76-runc.1X76CI.mount: Deactivated successfully.
Jul 7 00:00:25.656340 systemd[1]: Started sshd@13-172.31.21.95:22-147.75.109.163:52464.service - OpenSSH per-connection server daemon (147.75.109.163:52464).
Jul 7 00:00:26.086030 sshd[6656]: Accepted publickey for core from 147.75.109.163 port 52464 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y
Jul 7 00:00:26.091343 sshd[6656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:00:26.100000 systemd-logind[1952]: New session 14 of user core.
Jul 7 00:00:26.107123 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 7 00:00:27.066364 sshd[6656]: pam_unix(sshd:session): session closed for user core
Jul 7 00:00:27.075393 systemd[1]: sshd@13-172.31.21.95:22-147.75.109.163:52464.service: Deactivated successfully.
Jul 7 00:00:27.082129 systemd[1]: session-14.scope: Deactivated successfully.
Jul 7 00:00:27.086820 systemd-logind[1952]: Session 14 logged out. Waiting for processes to exit.
Jul 7 00:00:27.092664 systemd-logind[1952]: Removed session 14.
Jul 7 00:00:27.552475 containerd[1975]: time="2025-07-07T00:00:27.552395064Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:00:27.553915 containerd[1975]: time="2025-07-07T00:00:27.553808785Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977"
Jul 7 00:00:27.554621 containerd[1975]: time="2025-07-07T00:00:27.554560068Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:00:27.569932 containerd[1975]: time="2025-07-07T00:00:27.569885459Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:00:27.570919 containerd[1975]: time="2025-07-07T00:00:27.570844550Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 5.42206291s"
Jul 7 00:00:27.570919 containerd[1975]: time="2025-07-07T00:00:27.570901977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\""
Jul 7 00:00:27.578890 containerd[1975]: time="2025-07-07T00:00:27.578829561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\""
Jul 7 00:00:27.710894 containerd[1975]: time="2025-07-07T00:00:27.710641535Z" level=info msg="CreateContainer within sandbox \"3c4763e818561c7ea4bb7c80ebab540caeffdc822297a897a8773359cdb2947a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 7 00:00:27.727727 containerd[1975]: time="2025-07-07T00:00:27.727678440Z" level=info msg="CreateContainer within sandbox \"3c4763e818561c7ea4bb7c80ebab540caeffdc822297a897a8773359cdb2947a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c491d9c5f7c95f815cf3fae887e530c674cf93c53ac83bd5036f0f7630ca3de0\""
Jul 7 00:00:27.728745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1090981470.mount: Deactivated successfully.
Jul 7 00:00:27.734025 containerd[1975]: time="2025-07-07T00:00:27.733863649Z" level=info msg="StartContainer for \"c491d9c5f7c95f815cf3fae887e530c674cf93c53ac83bd5036f0f7630ca3de0\""
Jul 7 00:00:27.842256 systemd[1]: Started cri-containerd-c491d9c5f7c95f815cf3fae887e530c674cf93c53ac83bd5036f0f7630ca3de0.scope - libcontainer container c491d9c5f7c95f815cf3fae887e530c674cf93c53ac83bd5036f0f7630ca3de0.
Jul 7 00:00:27.915787 containerd[1975]: time="2025-07-07T00:00:27.915493546Z" level=info msg="StartContainer for \"c491d9c5f7c95f815cf3fae887e530c674cf93c53ac83bd5036f0f7630ca3de0\" returns successfully"
Jul 7 00:00:28.691817 kubelet[3195]: I0707 00:00:28.691730 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8484c8784c-78zl4" podStartSLOduration=38.132605178 podStartE2EDuration="57.69170767s" podCreationTimestamp="2025-07-06 23:59:31 +0000 UTC" firstStartedPulling="2025-07-07 00:00:08.030322146 +0000 UTC m=+55.302250487" lastFinishedPulling="2025-07-07 00:00:27.589424635 +0000 UTC m=+74.861352979" observedRunningTime="2025-07-07 00:00:28.684672756 +0000 UTC m=+75.956601104" watchObservedRunningTime="2025-07-07 00:00:28.69170767 +0000 UTC m=+75.963636017"
Jul 7 00:00:29.650339 kubelet[3195]: I0707 00:00:29.649935 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmqfq\" (UniqueName: \"kubernetes.io/projected/f0f45499-5f8a-4798-8d2e-3b52e2a85b5e-kube-api-access-xmqfq\") pod \"calico-apiserver-8484c8784c-mpzmz\" (UID: \"f0f45499-5f8a-4798-8d2e-3b52e2a85b5e\") " pod="calico-apiserver/calico-apiserver-8484c8784c-mpzmz"
Jul 7 00:00:29.650339 kubelet[3195]: I0707 00:00:29.650027 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f0f45499-5f8a-4798-8d2e-3b52e2a85b5e-calico-apiserver-certs\") pod \"calico-apiserver-8484c8784c-mpzmz\" (UID: \"f0f45499-5f8a-4798-8d2e-3b52e2a85b5e\") " pod="calico-apiserver/calico-apiserver-8484c8784c-mpzmz"
Jul 7 00:00:29.694489 systemd[1]: Created slice kubepods-besteffort-podf0f45499_5f8a_4798_8d2e_3b52e2a85b5e.slice - libcontainer container kubepods-besteffort-podf0f45499_5f8a_4798_8d2e_3b52e2a85b5e.slice.
Jul 7 00:00:29.908035 containerd[1975]: time="2025-07-07T00:00:29.907897681Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:00:29.909637 containerd[1975]: time="2025-07-07T00:00:29.909008506Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77"
Jul 7 00:00:29.945778 containerd[1975]: time="2025-07-07T00:00:29.944883152Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 2.365990866s"
Jul 7 00:00:29.945778 containerd[1975]: time="2025-07-07T00:00:29.944956955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\""
Jul 7 00:00:29.966123 containerd[1975]: time="2025-07-07T00:00:29.966082093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\""
Jul 7 00:00:29.973000 containerd[1975]: time="2025-07-07T00:00:29.972959467Z" level=info msg="CreateContainer within sandbox \"a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 7 00:00:29.998345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1498859085.mount: Deactivated successfully.
Jul 7 00:00:30.018830 containerd[1975]: time="2025-07-07T00:00:30.018697040Z" level=info msg="CreateContainer within sandbox \"a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"32a32f66c74b402a6d414bf7b99eec571ec4837c45351dab364a885044da25e8\""
Jul 7 00:00:30.024978 containerd[1975]: time="2025-07-07T00:00:30.024737825Z" level=info msg="StartContainer for \"32a32f66c74b402a6d414bf7b99eec571ec4837c45351dab364a885044da25e8\""
Jul 7 00:00:30.121017 containerd[1975]: time="2025-07-07T00:00:30.120966718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8484c8784c-mpzmz,Uid:f0f45499-5f8a-4798-8d2e-3b52e2a85b5e,Namespace:calico-apiserver,Attempt:0,}"
Jul 7 00:00:30.175262 systemd[1]: Started cri-containerd-32a32f66c74b402a6d414bf7b99eec571ec4837c45351dab364a885044da25e8.scope - libcontainer container 32a32f66c74b402a6d414bf7b99eec571ec4837c45351dab364a885044da25e8.
Jul 7 00:00:30.277492 containerd[1975]: time="2025-07-07T00:00:30.275643616Z" level=info msg="StartContainer for \"32a32f66c74b402a6d414bf7b99eec571ec4837c45351dab364a885044da25e8\" returns successfully"
Jul 7 00:00:30.771479 kubelet[3195]: I0707 00:00:30.771287 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-78dd578d87-r8llj" podStartSLOduration=39.122009511 podStartE2EDuration="1m0.771165553s" podCreationTimestamp="2025-07-06 23:59:30 +0000 UTC" firstStartedPulling="2025-07-07 00:00:08.316612561 +0000 UTC m=+55.588540887" lastFinishedPulling="2025-07-07 00:00:29.965768583 +0000 UTC m=+77.237696929" observedRunningTime="2025-07-07 00:00:30.768204812 +0000 UTC m=+78.040133162" watchObservedRunningTime="2025-07-07 00:00:30.771165553 +0000 UTC m=+78.043093898"
Jul 7 00:00:31.643192 systemd-networkd[1816]: cali3f14a556a2f: Link UP
Jul 7 00:00:31.645385 systemd-networkd[1816]: cali3f14a556a2f: Gained carrier
Jul 7 00:00:31.679902 (udev-worker)[6785]: Network interface NamePolicy= disabled on kernel command line.
Jul 7 00:00:31.708905 containerd[1975]: 2025-07-07 00:00:30.988 [INFO][6766] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--95-k8s-calico--apiserver--8484c8784c--mpzmz-eth0 calico-apiserver-8484c8784c- calico-apiserver f0f45499-5f8a-4798-8d2e-3b52e2a85b5e 1236 0 2025-07-07 00:00:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8484c8784c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-21-95 calico-apiserver-8484c8784c-mpzmz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3f14a556a2f [] [] }} ContainerID="1faff849804d0f51e7a23b636ef34b980b37a563ecfec8bb4912e9aa91a33c42" Namespace="calico-apiserver" Pod="calico-apiserver-8484c8784c-mpzmz" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--apiserver--8484c8784c--mpzmz-"
Jul 7 00:00:31.708905 containerd[1975]: 2025-07-07 00:00:30.991 [INFO][6766] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1faff849804d0f51e7a23b636ef34b980b37a563ecfec8bb4912e9aa91a33c42" Namespace="calico-apiserver" Pod="calico-apiserver-8484c8784c-mpzmz" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--apiserver--8484c8784c--mpzmz-eth0"
Jul 7 00:00:31.708905 containerd[1975]: 2025-07-07 00:00:31.458 [INFO][6778] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1faff849804d0f51e7a23b636ef34b980b37a563ecfec8bb4912e9aa91a33c42" HandleID="k8s-pod-network.1faff849804d0f51e7a23b636ef34b980b37a563ecfec8bb4912e9aa91a33c42" Workload="ip--172--31--21--95-k8s-calico--apiserver--8484c8784c--mpzmz-eth0"
Jul 7 00:00:31.708905 containerd[1975]: 2025-07-07 00:00:31.462 [INFO][6778] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1faff849804d0f51e7a23b636ef34b980b37a563ecfec8bb4912e9aa91a33c42" HandleID="k8s-pod-network.1faff849804d0f51e7a23b636ef34b980b37a563ecfec8bb4912e9aa91a33c42" Workload="ip--172--31--21--95-k8s-calico--apiserver--8484c8784c--mpzmz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003001c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-21-95", "pod":"calico-apiserver-8484c8784c-mpzmz", "timestamp":"2025-07-07 00:00:31.458853419 +0000 UTC"}, Hostname:"ip-172-31-21-95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 7 00:00:31.708905 containerd[1975]: 2025-07-07 00:00:31.470 [INFO][6778] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 00:00:31.708905 containerd[1975]: 2025-07-07 00:00:31.471 [INFO][6778] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 00:00:31.708905 containerd[1975]: 2025-07-07 00:00:31.471 [INFO][6778] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-95'
Jul 7 00:00:31.708905 containerd[1975]: 2025-07-07 00:00:31.499 [INFO][6778] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1faff849804d0f51e7a23b636ef34b980b37a563ecfec8bb4912e9aa91a33c42" host="ip-172-31-21-95"
Jul 7 00:00:31.708905 containerd[1975]: 2025-07-07 00:00:31.522 [INFO][6778] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-95"
Jul 7 00:00:31.708905 containerd[1975]: 2025-07-07 00:00:31.532 [INFO][6778] ipam/ipam.go 511: Trying affinity for 192.168.15.128/26 host="ip-172-31-21-95"
Jul 7 00:00:31.708905 containerd[1975]: 2025-07-07 00:00:31.535 [INFO][6778] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.128/26 host="ip-172-31-21-95"
Jul 7 00:00:31.708905 containerd[1975]: 2025-07-07 00:00:31.539 [INFO][6778] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.128/26 host="ip-172-31-21-95"
Jul 7 00:00:31.708905 containerd[1975]: 2025-07-07 00:00:31.539 [INFO][6778] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.128/26 handle="k8s-pod-network.1faff849804d0f51e7a23b636ef34b980b37a563ecfec8bb4912e9aa91a33c42" host="ip-172-31-21-95"
Jul 7 00:00:31.708905 containerd[1975]: 2025-07-07 00:00:31.542 [INFO][6778] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1faff849804d0f51e7a23b636ef34b980b37a563ecfec8bb4912e9aa91a33c42
Jul 7 00:00:31.708905 containerd[1975]: 2025-07-07 00:00:31.549 [INFO][6778] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.128/26 handle="k8s-pod-network.1faff849804d0f51e7a23b636ef34b980b37a563ecfec8bb4912e9aa91a33c42" host="ip-172-31-21-95"
Jul 7 00:00:31.708905 containerd[1975]: 2025-07-07 00:00:31.582 [INFO][6778] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.138/26] block=192.168.15.128/26 handle="k8s-pod-network.1faff849804d0f51e7a23b636ef34b980b37a563ecfec8bb4912e9aa91a33c42" host="ip-172-31-21-95"
Jul 7 00:00:31.708905 containerd[1975]: 2025-07-07 00:00:31.582 [INFO][6778] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.138/26] handle="k8s-pod-network.1faff849804d0f51e7a23b636ef34b980b37a563ecfec8bb4912e9aa91a33c42" host="ip-172-31-21-95"
Jul 7 00:00:31.708905 containerd[1975]: 2025-07-07 00:00:31.582 [INFO][6778] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 00:00:31.708905 containerd[1975]: 2025-07-07 00:00:31.583 [INFO][6778] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.138/26] IPv6=[] ContainerID="1faff849804d0f51e7a23b636ef34b980b37a563ecfec8bb4912e9aa91a33c42" HandleID="k8s-pod-network.1faff849804d0f51e7a23b636ef34b980b37a563ecfec8bb4912e9aa91a33c42" Workload="ip--172--31--21--95-k8s-calico--apiserver--8484c8784c--mpzmz-eth0"
Jul 7 00:00:31.722747 containerd[1975]: 2025-07-07 00:00:31.594 [INFO][6766] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1faff849804d0f51e7a23b636ef34b980b37a563ecfec8bb4912e9aa91a33c42" Namespace="calico-apiserver" Pod="calico-apiserver-8484c8784c-mpzmz" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--apiserver--8484c8784c--mpzmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--95-k8s-calico--apiserver--8484c8784c--mpzmz-eth0", GenerateName:"calico-apiserver-8484c8784c-", Namespace:"calico-apiserver", SelfLink:"", UID:"f0f45499-5f8a-4798-8d2e-3b52e2a85b5e", ResourceVersion:"1236", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 29, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8484c8784c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-95", ContainerID:"", Pod:"calico-apiserver-8484c8784c-mpzmz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3f14a556a2f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 00:00:31.722747 containerd[1975]: 2025-07-07 00:00:31.596 [INFO][6766] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.138/32] ContainerID="1faff849804d0f51e7a23b636ef34b980b37a563ecfec8bb4912e9aa91a33c42" Namespace="calico-apiserver" Pod="calico-apiserver-8484c8784c-mpzmz" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--apiserver--8484c8784c--mpzmz-eth0"
Jul 7 00:00:31.722747 containerd[1975]: 2025-07-07 00:00:31.596 [INFO][6766] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3f14a556a2f ContainerID="1faff849804d0f51e7a23b636ef34b980b37a563ecfec8bb4912e9aa91a33c42" Namespace="calico-apiserver" Pod="calico-apiserver-8484c8784c-mpzmz" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--apiserver--8484c8784c--mpzmz-eth0"
Jul 7 00:00:31.722747 containerd[1975]: 2025-07-07 00:00:31.652 [INFO][6766] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1faff849804d0f51e7a23b636ef34b980b37a563ecfec8bb4912e9aa91a33c42" Namespace="calico-apiserver" Pod="calico-apiserver-8484c8784c-mpzmz" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--apiserver--8484c8784c--mpzmz-eth0"
Jul 7 00:00:31.722747 containerd[1975]: 2025-07-07 00:00:31.658 [INFO][6766] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1faff849804d0f51e7a23b636ef34b980b37a563ecfec8bb4912e9aa91a33c42" Namespace="calico-apiserver" Pod="calico-apiserver-8484c8784c-mpzmz" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--apiserver--8484c8784c--mpzmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--95-k8s-calico--apiserver--8484c8784c--mpzmz-eth0", GenerateName:"calico-apiserver-8484c8784c-", Namespace:"calico-apiserver", SelfLink:"", UID:"f0f45499-5f8a-4798-8d2e-3b52e2a85b5e", ResourceVersion:"1236", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 29, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8484c8784c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-95", ContainerID:"1faff849804d0f51e7a23b636ef34b980b37a563ecfec8bb4912e9aa91a33c42", Pod:"calico-apiserver-8484c8784c-mpzmz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3f14a556a2f", MAC:"be:ea:d4:cf:d0:f3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 00:00:31.722747 containerd[1975]: 2025-07-07 00:00:31.695 [INFO][6766] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1faff849804d0f51e7a23b636ef34b980b37a563ecfec8bb4912e9aa91a33c42" Namespace="calico-apiserver" Pod="calico-apiserver-8484c8784c-mpzmz" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--apiserver--8484c8784c--mpzmz-eth0"
Jul 7 00:00:31.959751 kubelet[3195]: I0707 00:00:31.958279 3195 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 7 00:00:32.005980 containerd[1975]: time="2025-07-07T00:00:31.949607638Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 00:00:32.005980 containerd[1975]: time="2025-07-07T00:00:31.972199025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 00:00:32.053131 containerd[1975]: time="2025-07-07T00:00:31.972232719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 00:00:32.053131 containerd[1975]: time="2025-07-07T00:00:32.004967117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 00:00:32.210170 systemd[1]: Started sshd@14-172.31.21.95:22-147.75.109.163:48712.service - OpenSSH per-connection server daemon (147.75.109.163:48712).
Jul 7 00:00:32.467113 systemd[1]: Started cri-containerd-1faff849804d0f51e7a23b636ef34b980b37a563ecfec8bb4912e9aa91a33c42.scope - libcontainer container 1faff849804d0f51e7a23b636ef34b980b37a563ecfec8bb4912e9aa91a33c42.
Jul 7 00:00:32.601327 sshd[6813]: Accepted publickey for core from 147.75.109.163 port 48712 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y
Jul 7 00:00:32.609795 sshd[6813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:00:32.631807 systemd-logind[1952]: New session 15 of user core.
Jul 7 00:00:32.635145 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 7 00:00:32.754362 containerd[1975]: time="2025-07-07T00:00:32.754190088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8484c8784c-mpzmz,Uid:f0f45499-5f8a-4798-8d2e-3b52e2a85b5e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1faff849804d0f51e7a23b636ef34b980b37a563ecfec8bb4912e9aa91a33c42\""
Jul 7 00:00:33.319636 systemd-networkd[1816]: cali3f14a556a2f: Gained IPv6LL
Jul 7 00:00:33.434136 containerd[1975]: time="2025-07-07T00:00:33.433898500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:00:33.462487 containerd[1975]: time="2025-07-07T00:00:33.462401643Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190"
Jul 7 00:00:33.463231 containerd[1975]: time="2025-07-07T00:00:33.463180408Z" level=info msg="CreateContainer within sandbox \"1faff849804d0f51e7a23b636ef34b980b37a563ecfec8bb4912e9aa91a33c42\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 7 00:00:33.556384 containerd[1975]: time="2025-07-07T00:00:33.556208468Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:00:33.562912 containerd[1975]: time="2025-07-07T00:00:33.562738338Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:00:33.567071 containerd[1975]: time="2025-07-07T00:00:33.566916167Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 3.598479263s"
Jul 7 00:00:33.567071 containerd[1975]: time="2025-07-07T00:00:33.566995112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\""
Jul 7 00:00:33.671155 containerd[1975]: time="2025-07-07T00:00:33.670879495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\""
Jul 7 00:00:33.682462 containerd[1975]: time="2025-07-07T00:00:33.682370855Z" level=info msg="CreateContainer within sandbox \"b6d0de9c5c6f8e5c51306884de7cb32099bc31f7beabc9094ce2e6560b21c42b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Jul 7 00:00:33.880177 containerd[1975]: time="2025-07-07T00:00:33.879119733Z" level=info msg="CreateContainer within sandbox \"1faff849804d0f51e7a23b636ef34b980b37a563ecfec8bb4912e9aa91a33c42\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1417a472c85c364d71a9035d379b10d29de4274566cb162925cf62d43c12ac11\""
Jul 7 00:00:33.886565 containerd[1975]: time="2025-07-07T00:00:33.885547833Z" level=info msg="CreateContainer within sandbox \"b6d0de9c5c6f8e5c51306884de7cb32099bc31f7beabc9094ce2e6560b21c42b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9e0fc05bf4576f0e66a3906ef378625521b396a083cdea0565cdd637aeac63ee\""
Jul 7 00:00:33.887431 containerd[1975]: time="2025-07-07T00:00:33.887388509Z" level=info msg="StartContainer for \"9e0fc05bf4576f0e66a3906ef378625521b396a083cdea0565cdd637aeac63ee\""
Jul 7 00:00:33.889959 containerd[1975]: time="2025-07-07T00:00:33.889641182Z" level=info msg="StartContainer for \"1417a472c85c364d71a9035d379b10d29de4274566cb162925cf62d43c12ac11\""
Jul 7 00:00:34.159646 systemd[1]: Started cri-containerd-1417a472c85c364d71a9035d379b10d29de4274566cb162925cf62d43c12ac11.scope - libcontainer container 1417a472c85c364d71a9035d379b10d29de4274566cb162925cf62d43c12ac11.
Jul 7 00:00:34.182220 systemd[1]: Started cri-containerd-9e0fc05bf4576f0e66a3906ef378625521b396a083cdea0565cdd637aeac63ee.scope - libcontainer container 9e0fc05bf4576f0e66a3906ef378625521b396a083cdea0565cdd637aeac63ee.
Jul 7 00:00:34.366198 containerd[1975]: time="2025-07-07T00:00:34.364199783Z" level=info msg="StartContainer for \"9e0fc05bf4576f0e66a3906ef378625521b396a083cdea0565cdd637aeac63ee\" returns successfully"
Jul 7 00:00:34.434032 containerd[1975]: time="2025-07-07T00:00:34.431130637Z" level=info msg="StartContainer for \"1417a472c85c364d71a9035d379b10d29de4274566cb162925cf62d43c12ac11\" returns successfully"
Jul 7 00:00:34.837486 sshd[6813]: pam_unix(sshd:session): session closed for user core
Jul 7 00:00:34.857483 systemd[1]: sshd@14-172.31.21.95:22-147.75.109.163:48712.service: Deactivated successfully.
Jul 7 00:00:34.870186 systemd[1]: session-15.scope: Deactivated successfully.
Jul 7 00:00:34.871144 systemd-logind[1952]: Session 15 logged out. Waiting for processes to exit.
Jul 7 00:00:34.920140 systemd[1]: Started sshd@15-172.31.21.95:22-147.75.109.163:48716.service - OpenSSH per-connection server daemon (147.75.109.163:48716).
Jul 7 00:00:34.924535 systemd-logind[1952]: Removed session 15.
Jul 7 00:00:35.318033 sshd[6931]: Accepted publickey for core from 147.75.109.163 port 48716 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y
Jul 7 00:00:35.329269 sshd[6931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:00:35.358796 systemd-logind[1952]: New session 16 of user core.
Jul 7 00:00:35.361093 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 7 00:00:36.008303 ntpd[1944]: Listen normally on 18 cali3f14a556a2f [fe80::ecee:eeff:feee:eeee%16]:123
Jul 7 00:00:36.014359 ntpd[1944]: 7 Jul 00:00:36 ntpd[1944]: Listen normally on 18 cali3f14a556a2f [fe80::ecee:eeff:feee:eeee%16]:123
Jul 7 00:00:36.139513 sshd[6931]: pam_unix(sshd:session): session closed for user core
Jul 7 00:00:36.150215 systemd-logind[1952]: Session 16 logged out. Waiting for processes to exit.
Jul 7 00:00:36.151134 systemd[1]: sshd@15-172.31.21.95:22-147.75.109.163:48716.service: Deactivated successfully.
Jul 7 00:00:36.155816 systemd[1]: session-16.scope: Deactivated successfully.
Jul 7 00:00:36.179631 systemd[1]: Started sshd@16-172.31.21.95:22-147.75.109.163:34930.service - OpenSSH per-connection server daemon (147.75.109.163:34930).
Jul 7 00:00:36.182950 systemd-logind[1952]: Removed session 16.
Jul 7 00:00:36.443151 containerd[1975]: time="2025-07-07T00:00:36.442971409Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:00:36.443151 containerd[1975]: time="2025-07-07T00:00:36.443092358Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77"
Jul 7 00:00:36.445335 sshd[6952]: Accepted publickey for core from 147.75.109.163 port 34930 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y
Jul 7 00:00:36.447299 sshd[6952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:00:36.454698 containerd[1975]: time="2025-07-07T00:00:36.454635382Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 2.783677055s"
Jul 7 00:00:36.454835 containerd[1975]: time="2025-07-07T00:00:36.454703698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\""
Jul 7 00:00:36.456788 containerd[1975]: time="2025-07-07T00:00:36.456064061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\""
Jul 7 00:00:36.464762 systemd-logind[1952]: New session 17 of user core.
Jul 7 00:00:36.468475 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 7 00:00:36.504660 containerd[1975]: time="2025-07-07T00:00:36.504609117Z" level=info msg="CreateContainer within sandbox \"4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 7 00:00:36.564896 containerd[1975]: time="2025-07-07T00:00:36.559617057Z" level=info msg="CreateContainer within sandbox \"4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"06740adfad4d43f1b4009f1e47bfad7f0af7b260b446a1ce310b8590b4d52382\""
Jul 7 00:00:36.572896 containerd[1975]: time="2025-07-07T00:00:36.571413741Z" level=info msg="StartContainer for \"06740adfad4d43f1b4009f1e47bfad7f0af7b260b446a1ce310b8590b4d52382\""
Jul 7 00:00:36.900525 systemd[1]: run-containerd-runc-k8s.io-06740adfad4d43f1b4009f1e47bfad7f0af7b260b446a1ce310b8590b4d52382-runc.D9Ly3J.mount: Deactivated successfully.
Jul 7 00:00:36.937102 systemd[1]: Started cri-containerd-06740adfad4d43f1b4009f1e47bfad7f0af7b260b446a1ce310b8590b4d52382.scope - libcontainer container 06740adfad4d43f1b4009f1e47bfad7f0af7b260b446a1ce310b8590b4d52382.
Jul 7 00:00:37.091934 kubelet[3195]: I0707 00:00:37.084997 3195 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 7 00:00:37.177305 sshd[6952]: pam_unix(sshd:session): session closed for user core
Jul 7 00:00:37.201745 systemd[1]: sshd@16-172.31.21.95:22-147.75.109.163:34930.service: Deactivated successfully.
Jul 7 00:00:37.209767 systemd[1]: session-17.scope: Deactivated successfully.
Jul 7 00:00:37.219093 systemd-logind[1952]: Session 17 logged out. Waiting for processes to exit.
Jul 7 00:00:37.223028 systemd-logind[1952]: Removed session 17.
Jul 7 00:00:37.342513 containerd[1975]: time="2025-07-07T00:00:37.341262596Z" level=info msg="StartContainer for \"06740adfad4d43f1b4009f1e47bfad7f0af7b260b446a1ce310b8590b4d52382\" returns successfully"
Jul 7 00:00:38.597250 kubelet[3195]: I0707 00:00:38.393334 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8484c8784c-mpzmz" podStartSLOduration=9.365984336 podStartE2EDuration="9.365984336s" podCreationTimestamp="2025-07-07 00:00:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:00:34.878812415 +0000 UTC m=+82.150740761" watchObservedRunningTime="2025-07-07 00:00:38.365984336 +0000 UTC m=+85.637912683"
Jul 7 00:00:38.881950 containerd[1975]: time="2025-07-07T00:00:38.881674116Z" level=info msg="StopContainer for \"06740adfad4d43f1b4009f1e47bfad7f0af7b260b446a1ce310b8590b4d52382\" with timeout 30 (s)"
Jul 7 00:00:38.913281 containerd[1975]: time="2025-07-07T00:00:38.912695482Z" level=info msg="Stop container \"06740adfad4d43f1b4009f1e47bfad7f0af7b260b446a1ce310b8590b4d52382\" with signal terminated"
Jul 7 00:00:39.338982 systemd[1]: cri-containerd-06740adfad4d43f1b4009f1e47bfad7f0af7b260b446a1ce310b8590b4d52382.scope: Deactivated successfully.
Jul 7 00:00:39.613547 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06740adfad4d43f1b4009f1e47bfad7f0af7b260b446a1ce310b8590b4d52382-rootfs.mount: Deactivated successfully.
Jul 7 00:00:39.671477 containerd[1975]: time="2025-07-07T00:00:39.634271601Z" level=info msg="shim disconnected" id=06740adfad4d43f1b4009f1e47bfad7f0af7b260b446a1ce310b8590b4d52382 namespace=k8s.io
Jul 7 00:00:39.678041 containerd[1975]: time="2025-07-07T00:00:39.677987401Z" level=warning msg="cleaning up after shim disconnected" id=06740adfad4d43f1b4009f1e47bfad7f0af7b260b446a1ce310b8590b4d52382 namespace=k8s.io
Jul 7 00:00:39.678041 containerd[1975]: time="2025-07-07T00:00:39.678031114Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 00:00:40.010672 containerd[1975]: time="2025-07-07T00:00:40.010619689Z" level=info msg="StopContainer for \"06740adfad4d43f1b4009f1e47bfad7f0af7b260b446a1ce310b8590b4d52382\" returns successfully"
Jul 7 00:00:40.073509 containerd[1975]: time="2025-07-07T00:00:40.073427093Z" level=info msg="StopPodSandbox for \"4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643\""
Jul 7 00:00:40.096706 containerd[1975]: time="2025-07-07T00:00:40.096650172Z" level=info msg="Container to stop \"06740adfad4d43f1b4009f1e47bfad7f0af7b260b446a1ce310b8590b4d52382\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 7 00:00:40.119166 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643-shm.mount: Deactivated successfully.
Jul 7 00:00:40.129286 systemd[1]: cri-containerd-4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643.scope: Deactivated successfully.
Jul 7 00:00:40.246167 containerd[1975]: time="2025-07-07T00:00:40.246086619Z" level=info msg="shim disconnected" id=4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643 namespace=k8s.io
Jul 7 00:00:40.247596 containerd[1975]: time="2025-07-07T00:00:40.246456814Z" level=warning msg="cleaning up after shim disconnected" id=4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643 namespace=k8s.io
Jul 7 00:00:40.247596 containerd[1975]: time="2025-07-07T00:00:40.246478248Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 00:00:40.248354 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643-rootfs.mount: Deactivated successfully.
Jul 7 00:00:41.101838 systemd-networkd[1816]: calid7b233dadea: Link DOWN
Jul 7 00:00:41.101849 systemd-networkd[1816]: calid7b233dadea: Lost carrier
Jul 7 00:00:41.501455 kubelet[3195]: I0707 00:00:41.484320 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-78dd578d87-hbf8l" podStartSLOduration=44.090641059 podStartE2EDuration="1m11.401559241s" podCreationTimestamp="2025-07-06 23:59:30 +0000 UTC" firstStartedPulling="2025-07-07 00:00:09.144712039 +0000 UTC m=+56.416640377" lastFinishedPulling="2025-07-07 00:00:36.455630228 +0000 UTC m=+83.727558559" observedRunningTime="2025-07-07 00:00:38.597221 +0000 UTC m=+85.869149338" watchObservedRunningTime="2025-07-07 00:00:41.401559241 +0000 UTC m=+88.673487594"
Jul 7 00:00:41.861545 containerd[1975]: 2025-07-07 00:00:41.053 [INFO][7106] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643"
Jul 7 00:00:41.861545 containerd[1975]: 2025-07-07 00:00:41.059 [INFO][7106] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" iface="eth0" netns="/var/run/netns/cni-50e1e547-687c-aa74-8ad1-18b516ef7879"
Jul 7 00:00:41.861545 containerd[1975]: 2025-07-07 00:00:41.060 [INFO][7106] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" iface="eth0" netns="/var/run/netns/cni-50e1e547-687c-aa74-8ad1-18b516ef7879"
Jul 7 00:00:41.861545 containerd[1975]: 2025-07-07 00:00:41.095 [INFO][7106] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" after=35.740567ms iface="eth0" netns="/var/run/netns/cni-50e1e547-687c-aa74-8ad1-18b516ef7879"
Jul 7 00:00:41.861545 containerd[1975]: 2025-07-07 00:00:41.095 [INFO][7106] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643"
Jul 7 00:00:41.861545 containerd[1975]: 2025-07-07 00:00:41.095 [INFO][7106] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643"
Jul 7 00:00:41.861545 containerd[1975]: 2025-07-07 00:00:41.652 [INFO][7127] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" HandleID="k8s-pod-network.4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--hbf8l-eth0"
Jul 7 00:00:41.861545 containerd[1975]: 2025-07-07 00:00:41.658 [INFO][7127] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 00:00:41.861545 containerd[1975]: 2025-07-07 00:00:41.659 [INFO][7127] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 00:00:41.861545 containerd[1975]: 2025-07-07 00:00:41.822 [INFO][7127] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" HandleID="k8s-pod-network.4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--hbf8l-eth0"
Jul 7 00:00:41.861545 containerd[1975]: 2025-07-07 00:00:41.822 [INFO][7127] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" HandleID="k8s-pod-network.4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--hbf8l-eth0"
Jul 7 00:00:41.861545 containerd[1975]: 2025-07-07 00:00:41.836 [INFO][7127] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 00:00:41.861545 containerd[1975]: 2025-07-07 00:00:41.848 [INFO][7106] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643"
Jul 7 00:00:41.898836 systemd[1]: run-netns-cni\x2d50e1e547\x2d687c\x2daa74\x2d8ad1\x2d18b516ef7879.mount: Deactivated successfully.
Jul 7 00:00:41.943066 kubelet[3195]: I0707 00:00:41.943007 3195 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643"
Jul 7 00:00:42.046303 containerd[1975]: time="2025-07-07T00:00:42.046235959Z" level=info msg="TearDown network for sandbox \"4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643\" successfully"
Jul 7 00:00:42.046303 containerd[1975]: time="2025-07-07T00:00:42.046301681Z" level=info msg="StopPodSandbox for \"4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643\" returns successfully"
Jul 7 00:00:42.272345 systemd[1]: Started sshd@17-172.31.21.95:22-147.75.109.163:34940.service - OpenSSH per-connection server daemon (147.75.109.163:34940).
Jul 7 00:00:42.356917 kubelet[3195]: I0707 00:00:42.355673 3195 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:00:42.759988 sshd[7147]: Accepted publickey for core from 147.75.109.163 port 34940 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:00:42.772255 sshd[7147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:00:42.800146 systemd-logind[1952]: New session 18 of user core. Jul 7 00:00:42.811192 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 7 00:00:43.386515 containerd[1975]: time="2025-07-07T00:00:43.386417963Z" level=info msg="StopContainer for \"32a32f66c74b402a6d414bf7b99eec571ec4837c45351dab364a885044da25e8\" with timeout 30 (s)" Jul 7 00:00:44.112618 kubelet[3195]: I0707 00:00:44.112344 3195 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4073ee90-8739-4135-b438-25bdb06e58b4-calico-apiserver-certs\") pod \"4073ee90-8739-4135-b438-25bdb06e58b4\" (UID: \"4073ee90-8739-4135-b438-25bdb06e58b4\") " Jul 7 00:00:44.112618 kubelet[3195]: I0707 00:00:44.112454 3195 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ddkh\" (UniqueName: \"kubernetes.io/projected/4073ee90-8739-4135-b438-25bdb06e58b4-kube-api-access-5ddkh\") pod \"4073ee90-8739-4135-b438-25bdb06e58b4\" (UID: \"4073ee90-8739-4135-b438-25bdb06e58b4\") " Jul 7 00:00:44.159067 containerd[1975]: time="2025-07-07T00:00:44.157961392Z" level=info msg="Stop container \"32a32f66c74b402a6d414bf7b99eec571ec4837c45351dab364a885044da25e8\" with signal terminated" Jul 7 00:00:44.268862 systemd[1]: var-lib-kubelet-pods-4073ee90\x2d8739\x2d4135\x2db438\x2d25bdb06e58b4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5ddkh.mount: Deactivated successfully. Jul 7 00:00:44.269509 systemd[1]: var-lib-kubelet-pods-4073ee90\x2d8739\x2d4135\x2db438\x2d25bdb06e58b4-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Jul 7 00:00:44.299697 kubelet[3195]: I0707 00:00:44.291925 3195 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4073ee90-8739-4135-b438-25bdb06e58b4-kube-api-access-5ddkh" (OuterVolumeSpecName: "kube-api-access-5ddkh") pod "4073ee90-8739-4135-b438-25bdb06e58b4" (UID: "4073ee90-8739-4135-b438-25bdb06e58b4"). InnerVolumeSpecName "kube-api-access-5ddkh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 00:00:44.302641 kubelet[3195]: I0707 00:00:44.289057 3195 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4073ee90-8739-4135-b438-25bdb06e58b4-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "4073ee90-8739-4135-b438-25bdb06e58b4" (UID: "4073ee90-8739-4135-b438-25bdb06e58b4"). InnerVolumeSpecName "calico-apiserver-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 7 00:00:44.316603 kubelet[3195]: I0707 00:00:44.316516 3195 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5ddkh\" (UniqueName: \"kubernetes.io/projected/4073ee90-8739-4135-b438-25bdb06e58b4-kube-api-access-5ddkh\") on node \"ip-172-31-21-95\" DevicePath \"\"" Jul 7 00:00:44.316603 kubelet[3195]: I0707 00:00:44.316555 3195 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4073ee90-8739-4135-b438-25bdb06e58b4-calico-apiserver-certs\") on node \"ip-172-31-21-95\" DevicePath \"\"" Jul 7 00:00:44.399036 systemd[1]: cri-containerd-32a32f66c74b402a6d414bf7b99eec571ec4837c45351dab364a885044da25e8.scope: Deactivated successfully. Jul 7 00:00:44.399673 systemd[1]: cri-containerd-32a32f66c74b402a6d414bf7b99eec571ec4837c45351dab364a885044da25e8.scope: Consumed 2.007s CPU time. Jul 7 00:00:44.566604 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32a32f66c74b402a6d414bf7b99eec571ec4837c45351dab364a885044da25e8-rootfs.mount: Deactivated successfully. Jul 7 00:00:44.588730 containerd[1975]: time="2025-07-07T00:00:44.584224402Z" level=info msg="shim disconnected" id=32a32f66c74b402a6d414bf7b99eec571ec4837c45351dab364a885044da25e8 namespace=k8s.io Jul 7 00:00:44.589629 containerd[1975]: time="2025-07-07T00:00:44.589117690Z" level=warning msg="cleaning up after shim disconnected" id=32a32f66c74b402a6d414bf7b99eec571ec4837c45351dab364a885044da25e8 namespace=k8s.io Jul 7 00:00:44.589629 containerd[1975]: time="2025-07-07T00:00:44.589162167Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:00:44.680717 containerd[1975]: time="2025-07-07T00:00:44.648259930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 7 00:00:44.680717 containerd[1975]: time="2025-07-07T00:00:44.661824998Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:44.682542 systemd[1]: Removed slice kubepods-besteffort-pod4073ee90_8739_4135_b438_25bdb06e58b4.slice - libcontainer container kubepods-besteffort-pod4073ee90_8739_4135_b438_25bdb06e58b4.slice. 
Jul 7 00:00:44.724954 containerd[1975]: time="2025-07-07T00:00:44.723988343Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:44.788505 containerd[1975]: time="2025-07-07T00:00:44.788455125Z" level=info msg="StopContainer for \"32a32f66c74b402a6d414bf7b99eec571ec4837c45351dab364a885044da25e8\" returns successfully" Jul 7 00:00:44.791919 containerd[1975]: time="2025-07-07T00:00:44.790216451Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:44.797547 containerd[1975]: time="2025-07-07T00:00:44.795886490Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 8.336924352s" Jul 7 00:00:44.813569 containerd[1975]: time="2025-07-07T00:00:44.809955868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 7 00:00:44.842041 containerd[1975]: time="2025-07-07T00:00:44.841393797Z" level=info msg="StopPodSandbox for \"a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce\"" Jul 7 00:00:44.857887 containerd[1975]: time="2025-07-07T00:00:44.856417373Z" level=info msg="Container to stop \"32a32f66c74b402a6d414bf7b99eec571ec4837c45351dab364a885044da25e8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:00:44.877842 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce-shm.mount: Deactivated successfully. Jul 7 00:00:44.893280 systemd[1]: cri-containerd-a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce.scope: Deactivated successfully. Jul 7 00:00:44.899500 kubelet[3195]: I0707 00:00:44.899438 3195 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4073ee90-8739-4135-b438-25bdb06e58b4" path="/var/lib/kubelet/pods/4073ee90-8739-4135-b438-25bdb06e58b4/volumes" Jul 7 00:00:44.908269 sshd[7147]: pam_unix(sshd:session): session closed for user core Jul 7 00:00:44.934214 systemd-logind[1952]: Session 18 logged out. Waiting for processes to exit. Jul 7 00:00:44.934989 systemd[1]: sshd@17-172.31.21.95:22-147.75.109.163:34940.service: Deactivated successfully. Jul 7 00:00:44.937377 systemd[1]: session-18.scope: Deactivated successfully. Jul 7 00:00:44.944764 systemd-logind[1952]: Removed session 18. 
Jul 7 00:00:44.976811 containerd[1975]: time="2025-07-07T00:00:44.976384263Z" level=info msg="shim disconnected" id=a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce namespace=k8s.io Jul 7 00:00:44.978057 containerd[1975]: time="2025-07-07T00:00:44.977927297Z" level=warning msg="cleaning up after shim disconnected" id=a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce namespace=k8s.io Jul 7 00:00:44.978057 containerd[1975]: time="2025-07-07T00:00:44.977982053Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:00:44.979732 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce-rootfs.mount: Deactivated successfully. Jul 7 00:00:45.008678 ntpd[1944]: Deleting interface #16 calid7b233dadea, fe80::ecee:eeff:feee:eeee%14#123, interface stats: received=0, sent=0, dropped=0, active_time=32 secs Jul 7 00:00:45.011060 ntpd[1944]: 7 Jul 00:00:45 ntpd[1944]: Deleting interface #16 calid7b233dadea, fe80::ecee:eeff:feee:eeee%14#123, interface stats: received=0, sent=0, dropped=0, active_time=32 secs Jul 7 00:00:45.021785 containerd[1975]: time="2025-07-07T00:00:45.021683891Z" level=info msg="CreateContainer within sandbox \"b6d0de9c5c6f8e5c51306884de7cb32099bc31f7beabc9094ce2e6560b21c42b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 7 00:00:45.111032 containerd[1975]: time="2025-07-07T00:00:45.110981165Z" level=info msg="CreateContainer within sandbox \"b6d0de9c5c6f8e5c51306884de7cb32099bc31f7beabc9094ce2e6560b21c42b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"21e6de8927f3a57f63f0d14602f9eced5725550c498ef3b173253010a6e8e320\"" Jul 7 00:00:45.126743 containerd[1975]: time="2025-07-07T00:00:45.126473951Z" level=info msg="StartContainer for \"21e6de8927f3a57f63f0d14602f9eced5725550c498ef3b173253010a6e8e320\"" Jul 7 00:00:45.184984 systemd[1]: Started cri-containerd-21e6de8927f3a57f63f0d14602f9eced5725550c498ef3b173253010a6e8e320.scope - libcontainer container 21e6de8927f3a57f63f0d14602f9eced5725550c498ef3b173253010a6e8e320. 
Jul 7 00:00:45.287684 containerd[1975]: time="2025-07-07T00:00:45.287191828Z" level=info msg="StartContainer for \"21e6de8927f3a57f63f0d14602f9eced5725550c498ef3b173253010a6e8e320\" returns successfully" Jul 7 00:00:45.596556 kubelet[3195]: I0707 00:00:45.596338 3195 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" Jul 7 00:00:45.679457 kubelet[3195]: I0707 00:00:45.636437 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-lrnkv" podStartSLOduration=34.326682178 podStartE2EDuration="1m10.622841606s" podCreationTimestamp="2025-07-06 23:59:35 +0000 UTC" firstStartedPulling="2025-07-07 00:00:08.551043882 +0000 UTC m=+55.822972217" lastFinishedPulling="2025-07-07 00:00:44.847203321 +0000 UTC m=+92.119131645" observedRunningTime="2025-07-07 00:00:45.622525647 +0000 UTC m=+92.894453991" watchObservedRunningTime="2025-07-07 00:00:45.622841606 +0000 UTC m=+92.894769952" Jul 7 00:00:45.768967 systemd-networkd[1816]: cali691761c7e36: Link DOWN Jul 7 00:00:45.772469 systemd-networkd[1816]: cali691761c7e36: Lost carrier Jul 7 00:00:46.475526 containerd[1975]: 2025-07-07 00:00:45.749 [INFO][7242] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" Jul 7 00:00:46.475526 containerd[1975]: 2025-07-07 00:00:45.753 [INFO][7242] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" iface="eth0" netns="/var/run/netns/cni-1a146f60-8129-f2af-c881-55f56a333920" Jul 7 00:00:46.475526 containerd[1975]: 2025-07-07 00:00:45.754 [INFO][7242] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" iface="eth0" netns="/var/run/netns/cni-1a146f60-8129-f2af-c881-55f56a333920" Jul 7 00:00:46.475526 containerd[1975]: 2025-07-07 00:00:45.776 [INFO][7242] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" after=22.634384ms iface="eth0" netns="/var/run/netns/cni-1a146f60-8129-f2af-c881-55f56a333920" Jul 7 00:00:46.475526 containerd[1975]: 2025-07-07 00:00:45.776 [INFO][7242] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" Jul 7 00:00:46.475526 containerd[1975]: 2025-07-07 00:00:45.777 [INFO][7242] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" Jul 7 00:00:46.475526 containerd[1975]: 2025-07-07 00:00:46.283 [INFO][7286] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" HandleID="k8s-pod-network.a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--r8llj-eth0" Jul 7 00:00:46.475526 containerd[1975]: 2025-07-07 00:00:46.286 [INFO][7286] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:46.475526 containerd[1975]: 2025-07-07 00:00:46.286 [INFO][7286] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
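The pod_startup_latency_tracker line above decomposes cleanly: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is the same span minus the image-pull window (lastFinishedPulling minus firstStartedPulling). Reproducing the csi-node-driver-lrnkv numbers, with timestamps truncated to microseconds since Python's datetime does not carry nanoseconds (hence the last-digit drift from the logged values):

```python
from datetime import datetime, timezone

utc = timezone.utc
created   = datetime(2025, 7, 6, 23, 59, 35, tzinfo=utc)        # podCreationTimestamp
running   = datetime(2025, 7, 7, 0, 0, 45, 622841, tzinfo=utc)  # observedRunningTime
pull_from = datetime(2025, 7, 7, 0, 0, 8, 551043, tzinfo=utc)   # firstStartedPulling
pull_to   = datetime(2025, 7, 7, 0, 0, 44, 847203, tzinfo=utc)  # lastFinishedPulling

e2e = running - created               # podStartE2EDuration: ~1m10.62s
slo = e2e - (pull_to - pull_from)     # podStartSLOduration: pull time excluded
print(e2e.total_seconds(), slo.total_seconds())  # 70.622841 34.326681
```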
Jul 7 00:00:46.475526 containerd[1975]: 2025-07-07 00:00:46.404 [INFO][7286] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" HandleID="k8s-pod-network.a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--r8llj-eth0" Jul 7 00:00:46.475526 containerd[1975]: 2025-07-07 00:00:46.404 [INFO][7286] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" HandleID="k8s-pod-network.a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--r8llj-eth0" Jul 7 00:00:46.475526 containerd[1975]: 2025-07-07 00:00:46.407 [INFO][7286] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:46.475526 containerd[1975]: 2025-07-07 00:00:46.426 [INFO][7242] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" Jul 7 00:00:46.492777 systemd[1]: run-netns-cni\x2d1a146f60\x2d8129\x2df2af\x2dc881\x2d55f56a333920.mount: Deactivated successfully. Jul 7 00:00:46.529382 containerd[1975]: time="2025-07-07T00:00:46.476607008Z" level=info msg="TearDown network for sandbox \"a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce\" successfully" Jul 7 00:00:46.540138 containerd[1975]: time="2025-07-07T00:00:46.539670801Z" level=info msg="StopPodSandbox for \"a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce\" returns successfully" Jul 7 00:00:46.811510 kubelet[3195]: I0707 00:00:46.810234 3195 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/09c6b849-d9f2-457c-9d21-c2403e3bc700-calico-apiserver-certs\") pod \"09c6b849-d9f2-457c-9d21-c2403e3bc700\" (UID: \"09c6b849-d9f2-457c-9d21-c2403e3bc700\") " Jul 7 00:00:46.811510 kubelet[3195]: I0707 00:00:46.810276 3195 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9ht8\" (UniqueName: \"kubernetes.io/projected/09c6b849-d9f2-457c-9d21-c2403e3bc700-kube-api-access-z9ht8\") pod \"09c6b849-d9f2-457c-9d21-c2403e3bc700\" (UID: \"09c6b849-d9f2-457c-9d21-c2403e3bc700\") " Jul 7 00:00:46.887053 kubelet[3195]: I0707 00:00:46.887007 3195 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09c6b849-d9f2-457c-9d21-c2403e3bc700-kube-api-access-z9ht8" (OuterVolumeSpecName: "kube-api-access-z9ht8") pod "09c6b849-d9f2-457c-9d21-c2403e3bc700" (UID: "09c6b849-d9f2-457c-9d21-c2403e3bc700"). InnerVolumeSpecName "kube-api-access-z9ht8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 00:00:46.891313 kubelet[3195]: I0707 00:00:46.891256 3195 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09c6b849-d9f2-457c-9d21-c2403e3bc700-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "09c6b849-d9f2-457c-9d21-c2403e3bc700" (UID: "09c6b849-d9f2-457c-9d21-c2403e3bc700"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 7 00:00:46.892455 systemd[1]: var-lib-kubelet-pods-09c6b849\x2dd9f2\x2d457c\x2d9d21\x2dc2403e3bc700-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz9ht8.mount: Deactivated successfully. 
Jul 7 00:00:46.892678 systemd[1]: var-lib-kubelet-pods-09c6b849\x2dd9f2\x2d457c\x2d9d21\x2dc2403e3bc700-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Jul 7 00:00:46.911329 kubelet[3195]: I0707 00:00:46.911270 3195 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/09c6b849-d9f2-457c-9d21-c2403e3bc700-calico-apiserver-certs\") on node \"ip-172-31-21-95\" DevicePath \"\"" Jul 7 00:00:46.911329 kubelet[3195]: I0707 00:00:46.911307 3195 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z9ht8\" (UniqueName: \"kubernetes.io/projected/09c6b849-d9f2-457c-9d21-c2403e3bc700-kube-api-access-z9ht8\") on node \"ip-172-31-21-95\" DevicePath \"\"" Jul 7 00:00:46.960743 kubelet[3195]: I0707 00:00:46.960697 3195 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 7 00:00:46.965716 kubelet[3195]: I0707 00:00:46.965652 3195 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 7 00:00:47.795570 systemd[1]: Removed slice kubepods-besteffort-pod09c6b849_d9f2_457c_9d21_c2403e3bc700.slice - libcontainer container kubepods-besteffort-pod09c6b849_d9f2_457c_9d21_c2403e3bc700.slice. Jul 7 00:00:47.795667 systemd[1]: kubepods-besteffort-pod09c6b849_d9f2_457c_9d21_c2403e3bc700.slice: Consumed 2.058s CPU time. Jul 7 00:00:48.008199 ntpd[1944]: Deleting interface #14 cali691761c7e36, fe80::ecee:eeff:feee:eeee%12#123, interface stats: received=0, sent=0, dropped=0, active_time=35 secs Jul 7 00:00:48.010445 ntpd[1944]: 7 Jul 00:00:48 ntpd[1944]: Deleting interface #14 cali691761c7e36, fe80::ecee:eeff:feee:eeee%12#123, interface stats: received=0, sent=0, dropped=0, active_time=35 secs Jul 7 00:00:48.905831 kubelet[3195]: I0707 00:00:48.903977 3195 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09c6b849-d9f2-457c-9d21-c2403e3bc700" path="/var/lib/kubelet/pods/09c6b849-d9f2-457c-9d21-c2403e3bc700/volumes" Jul 7 00:00:49.964249 systemd[1]: Started sshd@18-172.31.21.95:22-147.75.109.163:40742.service - OpenSSH per-connection server daemon (147.75.109.163:40742). Jul 7 00:00:50.243737 sshd[7304]: Accepted publickey for core from 147.75.109.163 port 40742 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:00:50.248057 sshd[7304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:00:50.256300 systemd-logind[1952]: New session 19 of user core. Jul 7 00:00:50.259182 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 7 00:00:50.751482 systemd[1]: run-containerd-runc-k8s.io-6348f01b8dda969eb1dc481abda4da1994549b11062e2d70feb15b271c6022c2-runc.99iip2.mount: Deactivated successfully. Jul 7 00:00:51.664312 sshd[7304]: pam_unix(sshd:session): session closed for user core Jul 7 00:00:51.672071 systemd[1]: sshd@18-172.31.21.95:22-147.75.109.163:40742.service: Deactivated successfully. Jul 7 00:00:51.674719 systemd[1]: session-19.scope: Deactivated successfully. Jul 7 00:00:51.675500 systemd-logind[1952]: Session 19 logged out. Waiting for processes to exit. Jul 7 00:00:51.676844 systemd-logind[1952]: Removed session 19. Jul 7 00:00:55.576394 systemd[1]: run-containerd-runc-k8s.io-a502d555597e2e9a86af10b58804dac453d2cb64843640a5386627d9c8b4ad76-runc.ZsJlhy.mount: Deactivated successfully. 
Jul 7 00:00:56.706231 systemd[1]: Started sshd@19-172.31.21.95:22-147.75.109.163:41596.service - OpenSSH per-connection server daemon (147.75.109.163:41596). Jul 7 00:00:57.009953 sshd[7368]: Accepted publickey for core from 147.75.109.163 port 41596 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:00:57.014728 sshd[7368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:00:57.022954 systemd-logind[1952]: New session 20 of user core. Jul 7 00:00:57.028223 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 7 00:00:57.834982 systemd[1]: run-containerd-runc-k8s.io-6348f01b8dda969eb1dc481abda4da1994549b11062e2d70feb15b271c6022c2-runc.sPy7Qp.mount: Deactivated successfully. Jul 7 00:00:58.140023 sshd[7368]: pam_unix(sshd:session): session closed for user core Jul 7 00:00:58.146654 systemd[1]: sshd@19-172.31.21.95:22-147.75.109.163:41596.service: Deactivated successfully. Jul 7 00:00:58.150194 systemd[1]: session-20.scope: Deactivated successfully. Jul 7 00:00:58.152293 systemd-logind[1952]: Session 20 logged out. Waiting for processes to exit. Jul 7 00:00:58.154373 systemd-logind[1952]: Removed session 20. Jul 7 00:00:58.176509 systemd[1]: Started sshd@20-172.31.21.95:22-147.75.109.163:41610.service - OpenSSH per-connection server daemon (147.75.109.163:41610). Jul 7 00:00:58.367419 sshd[7402]: Accepted publickey for core from 147.75.109.163 port 41610 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:00:58.368049 sshd[7402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:00:58.373773 systemd-logind[1952]: New session 21 of user core. Jul 7 00:00:58.379255 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 7 00:00:59.059831 sshd[7402]: pam_unix(sshd:session): session closed for user core Jul 7 00:00:59.063744 systemd[1]: sshd@20-172.31.21.95:22-147.75.109.163:41610.service: Deactivated successfully. Jul 7 00:00:59.066036 systemd[1]: session-21.scope: Deactivated successfully. Jul 7 00:00:59.066892 systemd-logind[1952]: Session 21 logged out. Waiting for processes to exit. Jul 7 00:00:59.067968 systemd-logind[1952]: Removed session 21. Jul 7 00:00:59.094351 systemd[1]: Started sshd@21-172.31.21.95:22-147.75.109.163:41612.service - OpenSSH per-connection server daemon (147.75.109.163:41612). Jul 7 00:00:59.303782 sshd[7413]: Accepted publickey for core from 147.75.109.163 port 41612 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:00:59.305465 sshd[7413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:00:59.310908 systemd-logind[1952]: New session 22 of user core. Jul 7 00:00:59.315089 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 7 00:01:00.681578 sshd[7413]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:00.708179 systemd[1]: sshd@21-172.31.21.95:22-147.75.109.163:41612.service: Deactivated successfully. Jul 7 00:01:00.714803 systemd[1]: session-22.scope: Deactivated successfully. Jul 7 00:01:00.719345 systemd-logind[1952]: Session 22 logged out. Waiting for processes to exit. Jul 7 00:01:00.729629 systemd[1]: Started sshd@22-172.31.21.95:22-147.75.109.163:41620.service - OpenSSH per-connection server daemon (147.75.109.163:41620). Jul 7 00:01:00.731717 systemd-logind[1952]: Removed session 22. 
Jul 7 00:01:01.020282 sshd[7435]: Accepted publickey for core from 147.75.109.163 port 41620 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:01:01.023088 sshd[7435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:01.029251 systemd-logind[1952]: New session 23 of user core. Jul 7 00:01:01.036090 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 7 00:01:04.733482 sshd[7435]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:04.746356 systemd[1]: sshd@22-172.31.21.95:22-147.75.109.163:41620.service: Deactivated successfully. Jul 7 00:01:04.750017 systemd[1]: session-23.scope: Deactivated successfully. Jul 7 00:01:04.768619 systemd-logind[1952]: Session 23 logged out. Waiting for processes to exit. Jul 7 00:01:04.777712 systemd[1]: Started sshd@23-172.31.21.95:22-147.75.109.163:41630.service - OpenSSH per-connection server daemon (147.75.109.163:41630). Jul 7 00:01:04.781143 systemd-logind[1952]: Removed session 23. Jul 7 00:01:05.001365 sshd[7447]: Accepted publickey for core from 147.75.109.163 port 41630 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:01:05.018894 sshd[7447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:05.055033 systemd-logind[1952]: New session 24 of user core. Jul 7 00:01:05.060204 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 7 00:01:05.293824 sshd[7447]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:05.298934 systemd-logind[1952]: Session 24 logged out. Waiting for processes to exit. Jul 7 00:01:05.300125 systemd[1]: sshd@23-172.31.21.95:22-147.75.109.163:41630.service: Deactivated successfully. Jul 7 00:01:05.303113 systemd[1]: session-24.scope: Deactivated successfully. Jul 7 00:01:05.304957 systemd-logind[1952]: Removed session 24. Jul 7 00:01:10.337473 systemd[1]: Started sshd@24-172.31.21.95:22-147.75.109.163:52650.service - OpenSSH per-connection server daemon (147.75.109.163:52650). Jul 7 00:01:10.768076 sshd[7487]: Accepted publickey for core from 147.75.109.163 port 52650 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:01:10.776108 sshd[7487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:10.782500 systemd-logind[1952]: New session 25 of user core. Jul 7 00:01:10.788166 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 7 00:01:11.914000 sshd[7487]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:11.919680 systemd-logind[1952]: Session 25 logged out. Waiting for processes to exit. Jul 7 00:01:11.920665 systemd[1]: sshd@24-172.31.21.95:22-147.75.109.163:52650.service: Deactivated successfully. Jul 7 00:01:11.923646 systemd[1]: session-25.scope: Deactivated successfully. Jul 7 00:01:11.924782 systemd-logind[1952]: Removed session 25. 
Jul 7 00:01:15.553614 kubelet[3195]: I0707 00:01:15.553555 3195 scope.go:117] "RemoveContainer" containerID="06740adfad4d43f1b4009f1e47bfad7f0af7b260b446a1ce310b8590b4d52382" Jul 7 00:01:15.746376 containerd[1975]: time="2025-07-07T00:01:15.719743320Z" level=info msg="RemoveContainer for \"06740adfad4d43f1b4009f1e47bfad7f0af7b260b446a1ce310b8590b4d52382\"" Jul 7 00:01:15.916525 containerd[1975]: time="2025-07-07T00:01:15.916196621Z" level=info msg="RemoveContainer for \"06740adfad4d43f1b4009f1e47bfad7f0af7b260b446a1ce310b8590b4d52382\" returns successfully" Jul 7 00:01:15.917277 kubelet[3195]: I0707 00:01:15.916590 3195 scope.go:117] "RemoveContainer" containerID="32a32f66c74b402a6d414bf7b99eec571ec4837c45351dab364a885044da25e8" Jul 7 00:01:15.920028 containerd[1975]: time="2025-07-07T00:01:15.919991107Z" level=info msg="RemoveContainer for \"32a32f66c74b402a6d414bf7b99eec571ec4837c45351dab364a885044da25e8\"" Jul 7 00:01:15.931062 containerd[1975]: time="2025-07-07T00:01:15.931005491Z" level=info msg="RemoveContainer for \"32a32f66c74b402a6d414bf7b99eec571ec4837c45351dab364a885044da25e8\" returns successfully" Jul 7 00:01:15.947844 containerd[1975]: time="2025-07-07T00:01:15.947792395Z" level=info msg="StopPodSandbox for \"4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643\"" Jul 7 00:01:17.010307 systemd[1]: Started sshd@25-172.31.21.95:22-147.75.109.163:36200.service - OpenSSH per-connection server daemon (147.75.109.163:36200). Jul 7 00:01:17.270067 containerd[1975]: 2025-07-07 00:01:16.512 [WARNING][7528] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--hbf8l-eth0" Jul 7 00:01:17.270067 containerd[1975]: 2025-07-07 00:01:16.516 [INFO][7528] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" Jul 7 00:01:17.270067 containerd[1975]: 2025-07-07 00:01:16.516 [INFO][7528] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" iface="eth0" netns="" Jul 7 00:01:17.270067 containerd[1975]: 2025-07-07 00:01:16.516 [INFO][7528] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" Jul 7 00:01:17.270067 containerd[1975]: 2025-07-07 00:01:16.516 [INFO][7528] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" Jul 7 00:01:17.270067 containerd[1975]: 2025-07-07 00:01:17.116 [INFO][7535] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" HandleID="k8s-pod-network.4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--hbf8l-eth0" Jul 7 00:01:17.270067 containerd[1975]: 2025-07-07 00:01:17.120 [INFO][7535] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:17.270067 containerd[1975]: 2025-07-07 00:01:17.121 [INFO][7535] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:17.270067 containerd[1975]: 2025-07-07 00:01:17.186 [WARNING][7535] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" HandleID="k8s-pod-network.4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--hbf8l-eth0" Jul 7 00:01:17.270067 containerd[1975]: 2025-07-07 00:01:17.187 [INFO][7535] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" HandleID="k8s-pod-network.4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--hbf8l-eth0" Jul 7 00:01:17.270067 containerd[1975]: 2025-07-07 00:01:17.197 [INFO][7535] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:17.270067 containerd[1975]: 2025-07-07 00:01:17.216 [INFO][7528] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" Jul 7 00:01:17.270067 containerd[1975]: time="2025-07-07T00:01:17.270032749Z" level=info msg="TearDown network for sandbox \"4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643\" successfully" Jul 7 00:01:17.270067 containerd[1975]: time="2025-07-07T00:01:17.270067503Z" level=info msg="StopPodSandbox for \"4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643\" returns successfully" Jul 7 00:01:17.308133 containerd[1975]: time="2025-07-07T00:01:17.308089754Z" level=info msg="RemovePodSandbox for \"4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643\"" Jul 7 00:01:17.324808 containerd[1975]: time="2025-07-07T00:01:17.324746096Z" level=info msg="Forcibly stopping sandbox \"4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643\"" Jul 7 00:01:17.575769 sshd[7542]: Accepted publickey for core from 147.75.109.163 port 36200 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:01:17.581496 sshd[7542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:17.596700 systemd-logind[1952]: New session 26 of user core. Jul 7 00:01:17.602087 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 7 00:01:17.798912 containerd[1975]: 2025-07-07 00:01:17.653 [WARNING][7552] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--hbf8l-eth0" Jul 7 00:01:17.798912 containerd[1975]: 2025-07-07 00:01:17.657 [INFO][7552] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" Jul 7 00:01:17.798912 containerd[1975]: 2025-07-07 00:01:17.657 [INFO][7552] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" iface="eth0" netns="" Jul 7 00:01:17.798912 containerd[1975]: 2025-07-07 00:01:17.657 [INFO][7552] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" Jul 7 00:01:17.798912 containerd[1975]: 2025-07-07 00:01:17.657 [INFO][7552] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" Jul 7 00:01:17.798912 containerd[1975]: 2025-07-07 00:01:17.769 [INFO][7560] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" HandleID="k8s-pod-network.4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--hbf8l-eth0" Jul 7 00:01:17.798912 containerd[1975]: 2025-07-07 00:01:17.770 [INFO][7560] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:17.798912 containerd[1975]: 2025-07-07 00:01:17.770 [INFO][7560] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:17.798912 containerd[1975]: 2025-07-07 00:01:17.785 [WARNING][7560] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" HandleID="k8s-pod-network.4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--hbf8l-eth0" Jul 7 00:01:17.798912 containerd[1975]: 2025-07-07 00:01:17.785 [INFO][7560] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" HandleID="k8s-pod-network.4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--hbf8l-eth0" Jul 7 00:01:17.798912 containerd[1975]: 2025-07-07 00:01:17.788 [INFO][7560] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:17.798912 containerd[1975]: 2025-07-07 00:01:17.794 [INFO][7552] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643" Jul 7 00:01:17.803001 containerd[1975]: time="2025-07-07T00:01:17.799500676Z" level=info msg="TearDown network for sandbox \"4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643\" successfully" Jul 7 00:01:17.834858 containerd[1975]: time="2025-07-07T00:01:17.834706853Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 7 00:01:17.834858 containerd[1975]: time="2025-07-07T00:01:17.834833123Z" level=info msg="RemovePodSandbox \"4df700b2827375cd7677d3de6652b4a62a2c4ea17afad5403f4008d749971643\" returns successfully" Jul 7 00:01:17.837545 containerd[1975]: time="2025-07-07T00:01:17.837251740Z" level=info msg="StopPodSandbox for \"a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce\"" Jul 7 00:01:18.008102 containerd[1975]: 2025-07-07 00:01:17.915 [WARNING][7577] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--r8llj-eth0" Jul 7 00:01:18.008102 containerd[1975]: 2025-07-07 00:01:17.915 [INFO][7577] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" Jul 7 00:01:18.008102 containerd[1975]: 2025-07-07 00:01:17.915 [INFO][7577] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" iface="eth0" netns="" Jul 7 00:01:18.008102 containerd[1975]: 2025-07-07 00:01:17.915 [INFO][7577] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" Jul 7 00:01:18.008102 containerd[1975]: 2025-07-07 00:01:17.915 [INFO][7577] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" Jul 7 00:01:18.008102 containerd[1975]: 2025-07-07 00:01:17.973 [INFO][7585] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" HandleID="k8s-pod-network.a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--r8llj-eth0" Jul 7 00:01:18.008102 containerd[1975]: 2025-07-07 00:01:17.973 [INFO][7585] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:18.008102 containerd[1975]: 2025-07-07 00:01:17.973 [INFO][7585] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:18.008102 containerd[1975]: 2025-07-07 00:01:17.994 [WARNING][7585] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" HandleID="k8s-pod-network.a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--r8llj-eth0" Jul 7 00:01:18.008102 containerd[1975]: 2025-07-07 00:01:17.994 [INFO][7585] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" HandleID="k8s-pod-network.a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--r8llj-eth0" Jul 7 00:01:18.008102 containerd[1975]: 2025-07-07 00:01:17.997 [INFO][7585] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:18.008102 containerd[1975]: 2025-07-07 00:01:18.000 [INFO][7577] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" Jul 7 00:01:18.008102 containerd[1975]: time="2025-07-07T00:01:18.003674395Z" level=info msg="TearDown network for sandbox \"a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce\" successfully" Jul 7 00:01:18.008102 containerd[1975]: time="2025-07-07T00:01:18.003722519Z" level=info msg="StopPodSandbox for \"a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce\" returns successfully" Jul 7 00:01:18.008102 containerd[1975]: time="2025-07-07T00:01:18.004337109Z" level=info msg="RemovePodSandbox for \"a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce\"" Jul 7 00:01:18.008102 containerd[1975]: time="2025-07-07T00:01:18.004374005Z" level=info msg="Forcibly stopping sandbox \"a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce\"" Jul 7 00:01:18.186906 containerd[1975]: 2025-07-07 00:01:18.093 [WARNING][7601] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" WorkloadEndpoint="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--r8llj-eth0" Jul 7 00:01:18.186906 containerd[1975]: 2025-07-07 00:01:18.094 [INFO][7601] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" Jul 7 00:01:18.186906 containerd[1975]: 2025-07-07 00:01:18.094 [INFO][7601] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" iface="eth0" netns="" Jul 7 00:01:18.186906 containerd[1975]: 2025-07-07 00:01:18.094 [INFO][7601] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" Jul 7 00:01:18.186906 containerd[1975]: 2025-07-07 00:01:18.094 [INFO][7601] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" Jul 7 00:01:18.186906 containerd[1975]: 2025-07-07 00:01:18.158 [INFO][7611] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" HandleID="k8s-pod-network.a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--r8llj-eth0" Jul 7 00:01:18.186906 containerd[1975]: 2025-07-07 00:01:18.159 [INFO][7611] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:18.186906 containerd[1975]: 2025-07-07 00:01:18.159 [INFO][7611] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:18.186906 containerd[1975]: 2025-07-07 00:01:18.174 [WARNING][7611] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" HandleID="k8s-pod-network.a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--r8llj-eth0" Jul 7 00:01:18.186906 containerd[1975]: 2025-07-07 00:01:18.174 [INFO][7611] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" HandleID="k8s-pod-network.a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" Workload="ip--172--31--21--95-k8s-calico--apiserver--78dd578d87--r8llj-eth0" Jul 7 00:01:18.186906 containerd[1975]: 2025-07-07 00:01:18.178 [INFO][7611] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:18.186906 containerd[1975]: 2025-07-07 00:01:18.182 [INFO][7601] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce" Jul 7 00:01:18.186906 containerd[1975]: time="2025-07-07T00:01:18.185155634Z" level=info msg="TearDown network for sandbox \"a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce\" successfully" Jul 7 00:01:18.199433 containerd[1975]: time="2025-07-07T00:01:18.199338538Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:01:18.199741 containerd[1975]: time="2025-07-07T00:01:18.199466192Z" level=info msg="RemovePodSandbox \"a2edd7cef072fb8ac9c9bb6c6ecc92de269fb96e0cfafc5fc813410e8f6b9fce\" returns successfully" Jul 7 00:01:19.259342 sshd[7542]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:19.272241 systemd[1]: sshd@25-172.31.21.95:22-147.75.109.163:36200.service: Deactivated successfully. Jul 7 00:01:19.280728 systemd[1]: session-26.scope: Deactivated successfully. Jul 7 00:01:19.282385 systemd-logind[1952]: Session 26 logged out. Waiting for processes to exit. Jul 7 00:01:19.289004 systemd-logind[1952]: Removed session 26. Jul 7 00:01:24.316237 systemd[1]: Started sshd@26-172.31.21.95:22-147.75.109.163:36214.service - OpenSSH per-connection server daemon (147.75.109.163:36214). Jul 7 00:01:24.634068 sshd[7642]: Accepted publickey for core from 147.75.109.163 port 36214 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:01:24.638213 sshd[7642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:24.650329 systemd-logind[1952]: New session 27 of user core. Jul 7 00:01:24.656178 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 7 00:01:26.030856 sshd[7642]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:26.037840 systemd[1]: sshd@26-172.31.21.95:22-147.75.109.163:36214.service: Deactivated successfully. Jul 7 00:01:26.043016 systemd[1]: session-27.scope: Deactivated successfully. Jul 7 00:01:26.044969 systemd-logind[1952]: Session 27 logged out. Waiting for processes to exit. Jul 7 00:01:26.047822 systemd-logind[1952]: Removed session 27. Jul 7 00:01:31.086636 systemd[1]: Started sshd@27-172.31.21.95:22-147.75.109.163:46580.service - OpenSSH per-connection server daemon (147.75.109.163:46580). 
Jul 7 00:01:31.115207 update_engine[1954]: I20250707 00:01:31.115134 1954 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 7 00:01:31.120689 update_engine[1954]: I20250707 00:01:31.116205 1954 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 7 00:01:31.120689 update_engine[1954]: I20250707 00:01:31.120301 1954 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 7 00:01:31.123461 update_engine[1954]: I20250707 00:01:31.123042 1954 omaha_request_params.cc:62] Current group set to lts Jul 7 00:01:31.123461 update_engine[1954]: I20250707 00:01:31.123248 1954 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 7 00:01:31.123461 update_engine[1954]: I20250707 00:01:31.123261 1954 update_attempter.cc:643] Scheduling an action processor start. Jul 7 00:01:31.123461 update_engine[1954]: I20250707 00:01:31.123286 1954 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 7 00:01:31.123461 update_engine[1954]: I20250707 00:01:31.123334 1954 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 7 00:01:31.140399 update_engine[1954]: I20250707 00:01:31.139300 1954 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 7 00:01:31.140399 update_engine[1954]: I20250707 00:01:31.139354 1954 omaha_request_action.cc:272] Request: Jul 7 00:01:31.140399 update_engine[1954]: Jul 7 00:01:31.140399 update_engine[1954]: Jul 7 00:01:31.140399 update_engine[1954]: Jul 7 00:01:31.140399 update_engine[1954]: Jul 7 00:01:31.140399 update_engine[1954]: Jul 7 00:01:31.140399 update_engine[1954]: Jul 7 00:01:31.140399 update_engine[1954]: Jul 7 00:01:31.140399 update_engine[1954]: Jul 7 00:01:31.140399 update_engine[1954]: I20250707 00:01:31.139364 1954 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 00:01:31.167230 locksmithd[2003]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 7 00:01:31.177109 update_engine[1954]: I20250707 00:01:31.174911 1954 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 00:01:31.181083 update_engine[1954]: I20250707 00:01:31.175267 1954 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 7 00:01:31.191104 update_engine[1954]: E20250707 00:01:31.191031 1954 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 00:01:31.192195 update_engine[1954]: I20250707 00:01:31.191157 1954 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 7 00:01:31.458418 sshd[7689]: Accepted publickey for core from 147.75.109.163 port 46580 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:01:31.463664 sshd[7689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:31.473354 systemd-logind[1952]: New session 28 of user core. Jul 7 00:01:31.481179 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 7 00:01:32.765994 sshd[7689]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:32.770211 systemd-logind[1952]: Session 28 logged out. Waiting for processes to exit. Jul 7 00:01:32.771643 systemd[1]: sshd@27-172.31.21.95:22-147.75.109.163:46580.service: Deactivated successfully. Jul 7 00:01:32.774813 systemd[1]: session-28.scope: Deactivated successfully. Jul 7 00:01:32.775986 systemd-logind[1952]: Removed session 28. 
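The Omaha server URL on this host is literally "disabled" ("Posting an Omaha request to disabled"), so name resolution can never succeed; the fetcher logs "No HTTP response" and schedules a retry, and the follow-up attempts land at 00:01:41 and 00:01:51 below, roughly 10 s apart. The shape of that loop in a Python sketch (the interval and retry count are read off these log timestamps, not taken from update_engine's source):

```python
import socket
import time

def check_omaha(host: str = "disabled", retries: int = 3,
                interval_s: float = 10.0):
    """Fixed-interval retry on DNS failure, mirroring the log above."""
    for attempt in range(1, retries + 1):
        try:
            return socket.getaddrinfo(host, 443)  # stand-in for the HTTP fetch
        except socket.gaierror:
            print(f"Unable to get http response code: "
                  f"Could not resolve host: {host}; retry {attempt}")
            time.sleep(interval_s)
    return None  # give up until the next periodic update check
```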
Jul 7 00:01:41.016584 update_engine[1954]: I20250707 00:01:41.016423 1954 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 00:01:41.017284 update_engine[1954]: I20250707 00:01:41.017254 1954 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 00:01:41.017514 update_engine[1954]: I20250707 00:01:41.017486 1954 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 7 00:01:41.017887 update_engine[1954]: E20250707 00:01:41.017841 1954 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 00:01:41.017923 update_engine[1954]: I20250707 00:01:41.017905 1954 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 7 00:01:46.766363 systemd[1]: cri-containerd-0348eedc37240641ad754b7152e6045467f61b3495910025147efe8a2f5a898e.scope: Deactivated successfully. Jul 7 00:01:46.766924 systemd[1]: cri-containerd-0348eedc37240641ad754b7152e6045467f61b3495910025147efe8a2f5a898e.scope: Consumed 4.739s CPU time, 46.4M memory peak, 0B memory swap peak. Jul 7 00:01:46.886466 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0348eedc37240641ad754b7152e6045467f61b3495910025147efe8a2f5a898e-rootfs.mount: Deactivated successfully. Jul 7 00:01:46.918510 systemd[1]: cri-containerd-bf813852e2590429a328c7ec5b61b2621c0ef9d60fcdca6a0cdbb096f67caef9.scope: Deactivated successfully. Jul 7 00:01:46.918823 systemd[1]: cri-containerd-bf813852e2590429a328c7ec5b61b2621c0ef9d60fcdca6a0cdbb096f67caef9.scope: Consumed 12.211s CPU time. Jul 7 00:01:46.972700 containerd[1975]: time="2025-07-07T00:01:46.910832213Z" level=info msg="shim disconnected" id=0348eedc37240641ad754b7152e6045467f61b3495910025147efe8a2f5a898e namespace=k8s.io Jul 7 00:01:46.982175 containerd[1975]: time="2025-07-07T00:01:46.973197984Z" level=warning msg="cleaning up after shim disconnected" id=0348eedc37240641ad754b7152e6045467f61b3495910025147efe8a2f5a898e namespace=k8s.io Jul 7 00:01:46.982175 containerd[1975]: time="2025-07-07T00:01:46.973231427Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:01:46.982175 containerd[1975]: time="2025-07-07T00:01:46.981221964Z" level=info msg="shim disconnected" id=bf813852e2590429a328c7ec5b61b2621c0ef9d60fcdca6a0cdbb096f67caef9 namespace=k8s.io Jul 7 00:01:46.982175 containerd[1975]: time="2025-07-07T00:01:46.981288497Z" level=warning msg="cleaning up after shim disconnected" id=bf813852e2590429a328c7ec5b61b2621c0ef9d60fcdca6a0cdbb096f67caef9 namespace=k8s.io Jul 7 00:01:46.982175 containerd[1975]: time="2025-07-07T00:01:46.981300160Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:01:46.990175 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf813852e2590429a328c7ec5b61b2621c0ef9d60fcdca6a0cdbb096f67caef9-rootfs.mount: Deactivated successfully. 
Jul 7 00:01:47.010525 kubelet[3195]: E0707 00:01:47.010454 3195 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-95?timeout=10s\": context deadline exceeded" Jul 7 00:01:47.111078 containerd[1975]: time="2025-07-07T00:01:47.110707184Z" level=warning msg="cleanup warnings time=\"2025-07-07T00:01:47Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 7 00:01:47.637835 kubelet[3195]: I0707 00:01:47.634248 3195 scope.go:117] "RemoveContainer" containerID="0348eedc37240641ad754b7152e6045467f61b3495910025147efe8a2f5a898e" Jul 7 00:01:47.642016 kubelet[3195]: I0707 00:01:47.641857 3195 scope.go:117] "RemoveContainer" containerID="bf813852e2590429a328c7ec5b61b2621c0ef9d60fcdca6a0cdbb096f67caef9" Jul 7 00:01:47.694268 containerd[1975]: time="2025-07-07T00:01:47.694056348Z" level=info msg="CreateContainer within sandbox \"c6c66a28d33dee1df04912bfd5dee50366f40ee6a45a4b772a78321061bc563e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jul 7 00:01:47.695792 containerd[1975]: time="2025-07-07T00:01:47.695016363Z" level=info msg="CreateContainer within sandbox \"76d40783b1e6e0466a27bb21660c4753552ded53ea8b11bd370cf91c8dc46b2c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jul 7 00:01:47.788728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2798756070.mount: Deactivated successfully. Jul 7 00:01:47.821117 containerd[1975]: time="2025-07-07T00:01:47.820504610Z" level=info msg="CreateContainer within sandbox \"76d40783b1e6e0466a27bb21660c4753552ded53ea8b11bd370cf91c8dc46b2c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2662f8c46c394ee7b43c84353757f74b4330d4c48fc38bc5d3d9fa4d75057542\"" Jul 7 00:01:47.824733 containerd[1975]: time="2025-07-07T00:01:47.824182547Z" level=info msg="StartContainer for \"2662f8c46c394ee7b43c84353757f74b4330d4c48fc38bc5d3d9fa4d75057542\"" Jul 7 00:01:47.826734 containerd[1975]: time="2025-07-07T00:01:47.826685211Z" level=info msg="CreateContainer within sandbox \"c6c66a28d33dee1df04912bfd5dee50366f40ee6a45a4b772a78321061bc563e\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"2f6bc071251f33ced452fe12c8aaccdd8c2c0ea17a53dc95c329ad2ef891209e\"" Jul 7 00:01:47.829092 containerd[1975]: time="2025-07-07T00:01:47.829060808Z" level=info msg="StartContainer for \"2f6bc071251f33ced452fe12c8aaccdd8c2c0ea17a53dc95c329ad2ef891209e\"" Jul 7 00:01:47.880411 systemd[1]: Started cri-containerd-2f6bc071251f33ced452fe12c8aaccdd8c2c0ea17a53dc95c329ad2ef891209e.scope - libcontainer container 2f6bc071251f33ced452fe12c8aaccdd8c2c0ea17a53dc95c329ad2ef891209e. Jul 7 00:01:47.897502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2807804416.mount: Deactivated successfully. Jul 7 00:01:47.909273 systemd[1]: Started cri-containerd-2662f8c46c394ee7b43c84353757f74b4330d4c48fc38bc5d3d9fa4d75057542.scope - libcontainer container 2662f8c46c394ee7b43c84353757f74b4330d4c48fc38bc5d3d9fa4d75057542. 
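The "Failed to update lease" error above is the kubelet's node heartbeat (a PUT to coordination.k8s.io, with the 10 s budget visible as ?timeout=10s in the URL) timing out while this node's control-plane containers are down; the Attempt:1 CreateContainer calls that follow are the kubelet restarting kube-controller-manager and tigera-operator after their shims exited. A hedged way to watch that lease from outside, using the official kubernetes Python client (assumes a reachable API server and a kubeconfig):

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
lease = client.CoordinationV1Api().read_namespaced_lease(
    name="ip-172-31-21-95", namespace="kube-node-lease")
# renew_time stops advancing while heartbeats fail, as in the log above
print(lease.spec.holder_identity, lease.spec.renew_time,
      lease.spec.lease_duration_seconds)
```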
Jul 7 00:01:47.951817 containerd[1975]: time="2025-07-07T00:01:47.951771314Z" level=info msg="StartContainer for \"2f6bc071251f33ced452fe12c8aaccdd8c2c0ea17a53dc95c329ad2ef891209e\" returns successfully"
Jul 7 00:01:48.006570 containerd[1975]: time="2025-07-07T00:01:48.006444814Z" level=info msg="StartContainer for \"2662f8c46c394ee7b43c84353757f74b4330d4c48fc38bc5d3d9fa4d75057542\" returns successfully"
Jul 7 00:01:51.013905 update_engine[1954]: I20250707 00:01:51.013811 1954 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 7 00:01:51.014403 update_engine[1954]: I20250707 00:01:51.014098 1954 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 7 00:01:51.014403 update_engine[1954]: I20250707 00:01:51.014324 1954 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 7 00:01:51.015861 update_engine[1954]: E20250707 00:01:51.014859 1954 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 7 00:01:51.015861 update_engine[1954]: I20250707 00:01:51.014954 1954 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jul 7 00:01:51.302213 systemd[1]: cri-containerd-b57563c6b9d8aac39c6d410237675f12ffb3e59c350281dfa0e357d64351e8f5.scope: Deactivated successfully.
Jul 7 00:01:51.302531 systemd[1]: cri-containerd-b57563c6b9d8aac39c6d410237675f12ffb3e59c350281dfa0e357d64351e8f5.scope: Consumed 3.750s CPU time, 24.6M memory peak, 0B memory swap peak.
Jul 7 00:01:51.335822 containerd[1975]: time="2025-07-07T00:01:51.335723949Z" level=info msg="shim disconnected" id=b57563c6b9d8aac39c6d410237675f12ffb3e59c350281dfa0e357d64351e8f5 namespace=k8s.io
Jul 7 00:01:51.337347 containerd[1975]: time="2025-07-07T00:01:51.335836637Z" level=warning msg="cleaning up after shim disconnected" id=b57563c6b9d8aac39c6d410237675f12ffb3e59c350281dfa0e357d64351e8f5 namespace=k8s.io
Jul 7 00:01:51.337347 containerd[1975]: time="2025-07-07T00:01:51.335850772Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 00:01:51.340248 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b57563c6b9d8aac39c6d410237675f12ffb3e59c350281dfa0e357d64351e8f5-rootfs.mount: Deactivated successfully.
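The pattern repeating through this window is the kubelet replacing crashed control-plane and operator containers: systemd reports the cri-containerd-<id>.scope unit deactivating together with its cgroup accounting summary ("Consumed ... CPU time, ... memory peak"), containerd cleans up the dead shim, and the kubelet issues RemoveContainer followed by CreateContainer/StartContainer with Attempt:1 in the same sandbox. On the node itself, the same lifecycle can be inspected with the standard CRI client; a generic sketch (illustrative usage, not commands run on this host):

    # List all CRI containers, including exited ones, then inspect one
    # by ID to see its exit code, attempt count and sandbox.
    crictl ps -a
    crictl inspect <container-id>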
Jul 7 00:01:51.364269 containerd[1975]: time="2025-07-07T00:01:51.364222211Z" level=warning msg="cleanup warnings time=\"2025-07-07T00:01:51Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 7 00:01:51.638455 kubelet[3195]: I0707 00:01:51.638357 3195 scope.go:117] "RemoveContainer" containerID="b57563c6b9d8aac39c6d410237675f12ffb3e59c350281dfa0e357d64351e8f5"
Jul 7 00:01:51.641115 containerd[1975]: time="2025-07-07T00:01:51.641071379Z" level=info msg="CreateContainer within sandbox \"a476558a506a617c3283d244d365c6d85e86dbd265997d0a4eb89c114f841ea2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul 7 00:01:51.669130 containerd[1975]: time="2025-07-07T00:01:51.669073347Z" level=info msg="CreateContainer within sandbox \"a476558a506a617c3283d244d365c6d85e86dbd265997d0a4eb89c114f841ea2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"d676701cc19e3bf400116b84e911a96f0f4cb7b661ddcf575a5ce6e3298bae56\""
Jul 7 00:01:51.670072 containerd[1975]: time="2025-07-07T00:01:51.669624136Z" level=info msg="StartContainer for \"d676701cc19e3bf400116b84e911a96f0f4cb7b661ddcf575a5ce6e3298bae56\""
Jul 7 00:01:51.716419 systemd[1]: Started cri-containerd-d676701cc19e3bf400116b84e911a96f0f4cb7b661ddcf575a5ce6e3298bae56.scope - libcontainer container d676701cc19e3bf400116b84e911a96f0f4cb7b661ddcf575a5ce6e3298bae56.
Jul 7 00:01:51.775378 containerd[1975]: time="2025-07-07T00:01:51.775335416Z" level=info msg="StartContainer for \"d676701cc19e3bf400116b84e911a96f0f4cb7b661ddcf575a5ce6e3298bae56\" returns successfully"
Jul 7 00:01:55.569762 systemd[1]: run-containerd-runc-k8s.io-a502d555597e2e9a86af10b58804dac453d2cb64843640a5386627d9c8b4ad76-runc.UNO11e.mount: Deactivated successfully.
Jul 7 00:01:57.017534 kubelet[3195]: E0707 00:01:57.017036 3195 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-95?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 7 00:01:57.751679 systemd[1]: run-containerd-runc-k8s.io-6348f01b8dda969eb1dc481abda4da1994549b11062e2d70feb15b271c6022c2-runc.KUSZE2.mount: Deactivated successfully.
Jul 7 00:01:59.613301 systemd[1]: cri-containerd-2f6bc071251f33ced452fe12c8aaccdd8c2c0ea17a53dc95c329ad2ef891209e.scope: Deactivated successfully.
Jul 7 00:01:59.649137 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f6bc071251f33ced452fe12c8aaccdd8c2c0ea17a53dc95c329ad2ef891209e-rootfs.mount: Deactivated successfully.
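The recurring "Failed to update lease" errors are the kubelet's node heartbeat: it PUTs its Lease object in the kube-node-lease namespace to the API server at 172.31.21.95:6443, and each attempt here exhausts the 10s client timeout, consistent with the control-plane containers on this node crashing and restarting in the same window. Lease freshness can be checked with plain kubectl; a generic diagnostic sketch (not a command taken from this log):

    # spec.renewTime records the node's last successful heartbeat.
    kubectl -n kube-node-lease get lease ip-172-31-21-95 -o yaml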
Jul 7 00:01:59.667819 containerd[1975]: time="2025-07-07T00:01:59.667742760Z" level=info msg="shim disconnected" id=2f6bc071251f33ced452fe12c8aaccdd8c2c0ea17a53dc95c329ad2ef891209e namespace=k8s.io
Jul 7 00:01:59.667819 containerd[1975]: time="2025-07-07T00:01:59.667804944Z" level=warning msg="cleaning up after shim disconnected" id=2f6bc071251f33ced452fe12c8aaccdd8c2c0ea17a53dc95c329ad2ef891209e namespace=k8s.io
Jul 7 00:01:59.667819 containerd[1975]: time="2025-07-07T00:01:59.667813848Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 00:02:00.675677 kubelet[3195]: I0707 00:02:00.675627 3195 scope.go:117] "RemoveContainer" containerID="bf813852e2590429a328c7ec5b61b2621c0ef9d60fcdca6a0cdbb096f67caef9"
Jul 7 00:02:00.676232 kubelet[3195]: I0707 00:02:00.675858 3195 scope.go:117] "RemoveContainer" containerID="2f6bc071251f33ced452fe12c8aaccdd8c2c0ea17a53dc95c329ad2ef891209e"
Jul 7 00:02:00.698380 kubelet[3195]: E0707 00:02:00.687205 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-747864d56d-5kdp2_tigera-operator(957afe95-7151-49bd-838b-f19b3008db34)\"" pod="tigera-operator/tigera-operator-747864d56d-5kdp2" podUID="957afe95-7151-49bd-838b-f19b3008db34"
Jul 7 00:02:00.707968 containerd[1975]: time="2025-07-07T00:02:00.707913146Z" level=info msg="RemoveContainer for \"bf813852e2590429a328c7ec5b61b2621c0ef9d60fcdca6a0cdbb096f67caef9\""
Jul 7 00:02:00.719158 containerd[1975]: time="2025-07-07T00:02:00.719064015Z" level=info msg="RemoveContainer for \"bf813852e2590429a328c7ec5b61b2621c0ef9d60fcdca6a0cdbb096f67caef9\" returns successfully"
Jul 7 00:02:01.012157 update_engine[1954]: I20250707 00:02:01.011776 1954 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 7 00:02:01.012613 update_engine[1954]: I20250707 00:02:01.012359 1954 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 7 00:02:01.012760 update_engine[1954]: I20250707 00:02:01.012634 1954 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 7 00:02:01.013288 update_engine[1954]: E20250707 00:02:01.013254 1954 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 7 00:02:01.013444 update_engine[1954]: I20250707 00:02:01.013424 1954 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 7 00:02:01.013554 update_engine[1954]: I20250707 00:02:01.013535 1954 omaha_request_action.cc:617] Omaha request response:
Jul 7 00:02:01.014216 update_engine[1954]: E20250707 00:02:01.014178 1954 omaha_request_action.cc:636] Omaha request network transfer failed.
Jul 7 00:02:01.044505 update_engine[1954]: I20250707 00:02:01.043998 1954 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jul 7 00:02:01.044505 update_engine[1954]: I20250707 00:02:01.044055 1954 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 7 00:02:01.044505 update_engine[1954]: I20250707 00:02:01.044066 1954 update_attempter.cc:306] Processing Done.
Jul 7 00:02:01.044505 update_engine[1954]: E20250707 00:02:01.044088 1954 update_attempter.cc:619] Update failed.
Jul 7 00:02:01.049078 update_engine[1954]: I20250707 00:02:01.048987 1954 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jul 7 00:02:01.049078 update_engine[1954]: I20250707 00:02:01.049052 1954 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jul 7 00:02:01.049078 update_engine[1954]: I20250707 00:02:01.049065 1954 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jul 7 00:02:01.049900 update_engine[1954]: I20250707 00:02:01.049180 1954 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 7 00:02:01.049900 update_engine[1954]: I20250707 00:02:01.049214 1954 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jul 7 00:02:01.049900 update_engine[1954]: I20250707 00:02:01.049221 1954 omaha_request_action.cc:272] Request:
Jul 7 00:02:01.049900 update_engine[1954]:
Jul 7 00:02:01.049900 update_engine[1954]:
Jul 7 00:02:01.049900 update_engine[1954]:
Jul 7 00:02:01.049900 update_engine[1954]:
Jul 7 00:02:01.049900 update_engine[1954]:
Jul 7 00:02:01.049900 update_engine[1954]:
Jul 7 00:02:01.049900 update_engine[1954]: I20250707 00:02:01.049230 1954 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 7 00:02:01.049900 update_engine[1954]: I20250707 00:02:01.049436 1954 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 7 00:02:01.049900 update_engine[1954]: I20250707 00:02:01.049649 1954 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 7 00:02:01.051839 update_engine[1954]: E20250707 00:02:01.050047 1954 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 7 00:02:01.051839 update_engine[1954]: I20250707 00:02:01.050110 1954 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 7 00:02:01.051839 update_engine[1954]: I20250707 00:02:01.050122 1954 omaha_request_action.cc:617] Omaha request response:
Jul 7 00:02:01.051839 update_engine[1954]: I20250707 00:02:01.050132 1954 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 7 00:02:01.051839 update_engine[1954]: I20250707 00:02:01.050140 1954 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 7 00:02:01.051839 update_engine[1954]: I20250707 00:02:01.050149 1954 update_attempter.cc:306] Processing Done.
Jul 7 00:02:01.051839 update_engine[1954]: I20250707 00:02:01.050159 1954 update_attempter.cc:310] Error event sent.
Jul 7 00:02:01.051839 update_engine[1954]: I20250707 00:02:01.050179 1954 update_check_scheduler.cc:74] Next update check in 42m35s
Jul 7 00:02:01.067929 locksmithd[2003]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jul 7 00:02:01.067929 locksmithd[2003]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jul 7 00:02:07.034362 kubelet[3195]: E0707 00:02:07.033743 3195 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-95?timeout=10s\": context deadline exceeded"
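The 00:02:01 block is the complete Omaha failure path: the transfer error is converted to error code 37 (kActionCodeOmahaErrorInHTTPResponse), payload_state elects to ignore failures until a valid Omaha response arrives, an error event is posted (to the same unresolvable host), and the scheduler backs off for 42m35s; locksmithd mirrors this as the UPDATE_STATUS_REPORTING_ERROR_EVENT and UPDATE_STATUS_IDLE transitions. The attempter's state can also be queried directly with the client that ships with the OS; a generic sketch (illustrative usage only):

    # Prints LAST_CHECKED_TIME, CURRENT_OP, NEW_VERSION and related
    # update_engine status fields.
    update_engine_client -status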