Jul 6 23:59:03.726213 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 22:23:50 -00 2025
Jul 6 23:59:03.726253 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:59:03.726273 kernel: BIOS-provided physical RAM map:
Jul 6 23:59:03.726286 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 6 23:59:03.726297 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Jul 6 23:59:03.726309 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Jul 6 23:59:03.726323 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Jul 6 23:59:03.726336 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Jul 6 23:59:03.726348 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Jul 6 23:59:03.726363 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Jul 6 23:59:03.726376 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Jul 6 23:59:03.726388 kernel: NX (Execute Disable) protection: active
Jul 6 23:59:03.726400 kernel: APIC: Static calls initialized
Jul 6 23:59:03.726413 kernel: efi: EFI v2.7 by EDK II
Jul 6 23:59:03.726429 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518
Jul 6 23:59:03.726446 kernel: SMBIOS 2.7 present.
Jul 6 23:59:03.726459 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jul 6 23:59:03.726472 kernel: Hypervisor detected: KVM
Jul 6 23:59:03.726486 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 6 23:59:03.726500 kernel: kvm-clock: using sched offset of 4366663170 cycles
Jul 6 23:59:03.726514 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 6 23:59:03.726528 kernel: tsc: Detected 2499.996 MHz processor
Jul 6 23:59:03.726542 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 6 23:59:03.727595 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 6 23:59:03.727612 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Jul 6 23:59:03.727632 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 6 23:59:03.727646 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 6 23:59:03.727660 kernel: Using GB pages for direct mapping
Jul 6 23:59:03.727674 kernel: Secure boot disabled
Jul 6 23:59:03.727688 kernel: ACPI: Early table checksum verification disabled
Jul 6 23:59:03.727702 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Jul 6 23:59:03.727716 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Jul 6 23:59:03.727730 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jul 6 23:59:03.727744 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jul 6 23:59:03.727762 kernel: ACPI: FACS 0x00000000789D0000 000040
Jul 6 23:59:03.727776 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jul 6 23:59:03.727790 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jul 6 23:59:03.727803 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jul 6 23:59:03.727818 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jul 6 23:59:03.727832 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jul 6 23:59:03.727852 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jul 6 23:59:03.727874 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jul 6 23:59:03.727888 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Jul 6 23:59:03.727903 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Jul 6 23:59:03.727918 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Jul 6 23:59:03.727933 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Jul 6 23:59:03.727948 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Jul 6 23:59:03.727966 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Jul 6 23:59:03.727980 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Jul 6 23:59:03.727993 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Jul 6 23:59:03.728004 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Jul 6 23:59:03.728016 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Jul 6 23:59:03.728028 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Jul 6 23:59:03.728040 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Jul 6 23:59:03.728052 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jul 6 23:59:03.728064 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jul 6 23:59:03.728076 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jul 6 23:59:03.728092 kernel: NUMA: Initialized distance table, cnt=1
Jul 6 23:59:03.728104 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff]
Jul 6 23:59:03.728116 kernel: Zone ranges:
Jul 6 23:59:03.728129 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 6 23:59:03.728143 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Jul 6 23:59:03.728157 kernel: Normal empty
Jul 6 23:59:03.728170 kernel: Movable zone start for each node
Jul 6 23:59:03.728185 kernel: Early memory node ranges
Jul 6 23:59:03.728199 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 6 23:59:03.728216 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Jul 6 23:59:03.728232 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Jul 6 23:59:03.728247 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Jul 6 23:59:03.728263 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 6 23:59:03.728276 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 6 23:59:03.728289 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jul 6 23:59:03.728303 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Jul 6 23:59:03.728316 kernel: ACPI: PM-Timer IO Port: 0xb008
Jul 6 23:59:03.728331 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 6 23:59:03.728348 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jul 6 23:59:03.728361 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 6 23:59:03.728376 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 6 23:59:03.728390 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 6 23:59:03.728404 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 6 23:59:03.728417 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 6 23:59:03.728431 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 6 23:59:03.728444 kernel: TSC deadline timer available
Jul 6 23:59:03.728458 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jul 6 23:59:03.728475 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 6 23:59:03.728490 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Jul 6 23:59:03.728504 kernel: Booting paravirtualized kernel on KVM
Jul 6 23:59:03.728518 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 6 23:59:03.728533 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 6 23:59:03.729625 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Jul 6 23:59:03.729644 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Jul 6 23:59:03.729658 kernel: pcpu-alloc: [0] 0 1
Jul 6 23:59:03.729671 kernel: kvm-guest: PV spinlocks enabled
Jul 6 23:59:03.729690 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 6 23:59:03.729707 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:59:03.729721 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 6 23:59:03.729734 kernel: random: crng init done
Jul 6 23:59:03.729748 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 6 23:59:03.729761 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 6 23:59:03.729776 kernel: Fallback order for Node 0: 0
Jul 6 23:59:03.729789 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Jul 6 23:59:03.729806 kernel: Policy zone: DMA32
Jul 6 23:59:03.729819 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 6 23:59:03.729834 kernel: Memory: 1874608K/2037804K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42868K init, 2324K bss, 162936K reserved, 0K cma-reserved)
Jul 6 23:59:03.729848 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 6 23:59:03.729862 kernel: Kernel/User page tables isolation: enabled
Jul 6 23:59:03.729877 kernel: ftrace: allocating 37966 entries in 149 pages
Jul 6 23:59:03.729891 kernel: ftrace: allocated 149 pages with 4 groups
Jul 6 23:59:03.729904 kernel: Dynamic Preempt: voluntary
Jul 6 23:59:03.729918 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 6 23:59:03.729936 kernel: rcu: RCU event tracing is enabled.
Jul 6 23:59:03.729950 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 6 23:59:03.729965 kernel: Trampoline variant of Tasks RCU enabled.
Jul 6 23:59:03.729979 kernel: Rude variant of Tasks RCU enabled.
Jul 6 23:59:03.729992 kernel: Tracing variant of Tasks RCU enabled.
Jul 6 23:59:03.730007 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 6 23:59:03.730021 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 6 23:59:03.730035 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 6 23:59:03.730064 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 6 23:59:03.730079 kernel: Console: colour dummy device 80x25
Jul 6 23:59:03.730094 kernel: printk: console [tty0] enabled
Jul 6 23:59:03.730109 kernel: printk: console [ttyS0] enabled
Jul 6 23:59:03.730127 kernel: ACPI: Core revision 20230628
Jul 6 23:59:03.730142 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jul 6 23:59:03.730158 kernel: APIC: Switch to symmetric I/O mode setup
Jul 6 23:59:03.730172 kernel: x2apic enabled
Jul 6 23:59:03.730187 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 6 23:59:03.730203 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jul 6 23:59:03.730222 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Jul 6 23:59:03.730238 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jul 6 23:59:03.730253 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jul 6 23:59:03.730269 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 6 23:59:03.730284 kernel: Spectre V2 : Mitigation: Retpolines
Jul 6 23:59:03.730299 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 6 23:59:03.730314 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jul 6 23:59:03.730329 kernel: RETBleed: Vulnerable
Jul 6 23:59:03.730347 kernel: Speculative Store Bypass: Vulnerable
Jul 6 23:59:03.730362 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 6 23:59:03.730377 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 6 23:59:03.730393 kernel: GDS: Unknown: Dependent on hypervisor status
Jul 6 23:59:03.730408 kernel: ITS: Mitigation: Aligned branch/return thunks
Jul 6 23:59:03.730423 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 6 23:59:03.730438 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 6 23:59:03.730454 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 6 23:59:03.730469 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jul 6 23:59:03.730485 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jul 6 23:59:03.730500 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jul 6 23:59:03.730518 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jul 6 23:59:03.730534 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jul 6 23:59:03.731587 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jul 6 23:59:03.731612 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 6 23:59:03.731628 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jul 6 23:59:03.731644 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jul 6 23:59:03.731660 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jul 6 23:59:03.731675 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jul 6 23:59:03.731690 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jul 6 23:59:03.731706 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jul 6 23:59:03.731722 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jul 6 23:59:03.731738 kernel: Freeing SMP alternatives memory: 32K
Jul 6 23:59:03.731758 kernel: pid_max: default: 32768 minimum: 301
Jul 6 23:59:03.731773 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 6 23:59:03.731789 kernel: landlock: Up and running.
Jul 6 23:59:03.731804 kernel: SELinux: Initializing.
Jul 6 23:59:03.731820 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 6 23:59:03.731835 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 6 23:59:03.731850 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jul 6 23:59:03.731866 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:59:03.731882 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:59:03.731898 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:59:03.731916 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jul 6 23:59:03.731931 kernel: signal: max sigframe size: 3632
Jul 6 23:59:03.731948 kernel: rcu: Hierarchical SRCU implementation.
Jul 6 23:59:03.731964 kernel: rcu: Max phase no-delay instances is 400.
Jul 6 23:59:03.731980 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 6 23:59:03.731995 kernel: smp: Bringing up secondary CPUs ...
Jul 6 23:59:03.732010 kernel: smpboot: x86: Booting SMP configuration:
Jul 6 23:59:03.732026 kernel: .... node #0, CPUs: #1
Jul 6 23:59:03.732043 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jul 6 23:59:03.732062 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jul 6 23:59:03.732078 kernel: smp: Brought up 1 node, 2 CPUs
Jul 6 23:59:03.732094 kernel: smpboot: Max logical packages: 1
Jul 6 23:59:03.732109 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Jul 6 23:59:03.732125 kernel: devtmpfs: initialized
Jul 6 23:59:03.732140 kernel: x86/mm: Memory block size: 128MB
Jul 6 23:59:03.732155 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Jul 6 23:59:03.732171 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 6 23:59:03.732186 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 6 23:59:03.732205 kernel: pinctrl core: initialized pinctrl subsystem
Jul 6 23:59:03.732220 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 6 23:59:03.732236 kernel: audit: initializing netlink subsys (disabled)
Jul 6 23:59:03.732251 kernel: audit: type=2000 audit(1751846341.965:1): state=initialized audit_enabled=0 res=1
Jul 6 23:59:03.732266 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 6 23:59:03.732281 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 6 23:59:03.732296 kernel: cpuidle: using governor menu
Jul 6 23:59:03.732311 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 6 23:59:03.732325 kernel: dca service started, version 1.12.1
Jul 6 23:59:03.732342 kernel: PCI: Using configuration type 1 for base access
Jul 6 23:59:03.732359 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 6 23:59:03.732374 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 6 23:59:03.732394 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 6 23:59:03.732411 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 6 23:59:03.732431 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 6 23:59:03.732445 kernel: ACPI: Added _OSI(Module Device)
Jul 6 23:59:03.732458 kernel: ACPI: Added _OSI(Processor Device)
Jul 6 23:59:03.732474 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 6 23:59:03.732492 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jul 6 23:59:03.732506 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 6 23:59:03.732520 kernel: ACPI: Interpreter enabled
Jul 6 23:59:03.732533 kernel: ACPI: PM: (supports S0 S5)
Jul 6 23:59:03.736636 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 6 23:59:03.736659 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 6 23:59:03.736676 kernel: PCI: Using E820 reservations for host bridge windows
Jul 6 23:59:03.736692 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 6 23:59:03.736709 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 6 23:59:03.736999 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jul 6 23:59:03.737152 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jul 6 23:59:03.737289 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jul 6 23:59:03.737311 kernel: acpiphp: Slot [3] registered
Jul 6 23:59:03.737328 kernel: acpiphp: Slot [4] registered
Jul 6 23:59:03.737345 kernel: acpiphp: Slot [5] registered
Jul 6 23:59:03.737363 kernel: acpiphp: Slot [6] registered
Jul 6 23:59:03.737384 kernel: acpiphp: Slot [7] registered
Jul 6 23:59:03.737401 kernel: acpiphp: Slot [8] registered
Jul 6 23:59:03.737417 kernel: acpiphp: Slot [9] registered
Jul 6 23:59:03.737434 kernel: acpiphp: Slot [10] registered
Jul 6 23:59:03.737451 kernel: acpiphp: Slot [11] registered
Jul 6 23:59:03.737468 kernel: acpiphp: Slot [12] registered
Jul 6 23:59:03.737485 kernel: acpiphp: Slot [13] registered
Jul 6 23:59:03.737501 kernel: acpiphp: Slot [14] registered
Jul 6 23:59:03.737518 kernel: acpiphp: Slot [15] registered
Jul 6 23:59:03.737538 kernel: acpiphp: Slot [16] registered
Jul 6 23:59:03.737577 kernel: acpiphp: Slot [17] registered
Jul 6 23:59:03.737595 kernel: acpiphp: Slot [18] registered
Jul 6 23:59:03.737611 kernel: acpiphp: Slot [19] registered
Jul 6 23:59:03.737628 kernel: acpiphp: Slot [20] registered
Jul 6 23:59:03.737645 kernel: acpiphp: Slot [21] registered
Jul 6 23:59:03.737661 kernel: acpiphp: Slot [22] registered
Jul 6 23:59:03.737678 kernel: acpiphp: Slot [23] registered
Jul 6 23:59:03.737695 kernel: acpiphp: Slot [24] registered
Jul 6 23:59:03.737712 kernel: acpiphp: Slot [25] registered
Jul 6 23:59:03.737733 kernel: acpiphp: Slot [26] registered
Jul 6 23:59:03.737750 kernel: acpiphp: Slot [27] registered
Jul 6 23:59:03.737767 kernel: acpiphp: Slot [28] registered
Jul 6 23:59:03.737784 kernel: acpiphp: Slot [29] registered
Jul 6 23:59:03.737801 kernel: acpiphp: Slot [30] registered
Jul 6 23:59:03.737818 kernel: acpiphp: Slot [31] registered
Jul 6 23:59:03.737835 kernel: PCI host bridge to bus 0000:00
Jul 6 23:59:03.737989 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 6 23:59:03.738128 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 6 23:59:03.738259 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 6 23:59:03.738382 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jul 6 23:59:03.738509 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Jul 6 23:59:03.738655 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 6 23:59:03.738858 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jul 6 23:59:03.739009 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jul 6 23:59:03.739161 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Jul 6 23:59:03.739297 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jul 6 23:59:03.739431 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jul 6 23:59:03.740648 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jul 6 23:59:03.740869 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jul 6 23:59:03.741050 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jul 6 23:59:03.741197 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jul 6 23:59:03.741342 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jul 6 23:59:03.741495 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Jul 6 23:59:03.741649 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Jul 6 23:59:03.741788 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jul 6 23:59:03.741925 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Jul 6 23:59:03.742067 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 6 23:59:03.742214 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jul 6 23:59:03.742366 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Jul 6 23:59:03.742518 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jul 6 23:59:03.744858 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Jul 6 23:59:03.744914 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 6 23:59:03.744935 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 6 23:59:03.744955 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 6 23:59:03.744976 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 6 23:59:03.745005 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 6 23:59:03.745025 kernel: iommu: Default domain type: Translated
Jul 6 23:59:03.745045 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 6 23:59:03.745064 kernel: efivars: Registered efivars operations
Jul 6 23:59:03.745084 kernel: PCI: Using ACPI for IRQ routing
Jul 6 23:59:03.745104 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 6 23:59:03.745124 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Jul 6 23:59:03.745143 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Jul 6 23:59:03.745321 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jul 6 23:59:03.745495 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jul 6 23:59:03.745677 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 6 23:59:03.745698 kernel: vgaarb: loaded
Jul 6 23:59:03.745716 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jul 6 23:59:03.745733 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jul 6 23:59:03.745749 kernel: clocksource: Switched to clocksource kvm-clock
Jul 6 23:59:03.745764 kernel: VFS: Disk quotas dquot_6.6.0
Jul 6 23:59:03.745778 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 6 23:59:03.745800 kernel: pnp: PnP ACPI init
Jul 6 23:59:03.745816 kernel: pnp: PnP ACPI: found 5 devices
Jul 6 23:59:03.745832 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 6 23:59:03.745848 kernel: NET: Registered PF_INET protocol family
Jul 6 23:59:03.745864 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 6 23:59:03.745878 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jul 6 23:59:03.745893 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 6 23:59:03.745909 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 6 23:59:03.745925 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jul 6 23:59:03.745944 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jul 6 23:59:03.745959 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 6 23:59:03.745975 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 6 23:59:03.745991 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 6 23:59:03.746006 kernel: NET: Registered PF_XDP protocol family
Jul 6 23:59:03.746138 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 6 23:59:03.746259 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 6 23:59:03.746376 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 6 23:59:03.746494 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jul 6 23:59:03.748102 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Jul 6 23:59:03.748340 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 6 23:59:03.748365 kernel: PCI: CLS 0 bytes, default 64
Jul 6 23:59:03.748382 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jul 6 23:59:03.748396 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jul 6 23:59:03.748417 kernel: clocksource: Switched to clocksource tsc
Jul 6 23:59:03.748436 kernel: Initialise system trusted keyrings
Jul 6 23:59:03.748450 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jul 6 23:59:03.748472 kernel: Key type asymmetric registered
Jul 6 23:59:03.748487 kernel: Asymmetric key parser 'x509' registered
Jul 6 23:59:03.748502 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 6 23:59:03.748518 kernel: io scheduler mq-deadline registered
Jul 6 23:59:03.748531 kernel: io scheduler kyber registered
Jul 6 23:59:03.748584 kernel: io scheduler bfq registered
Jul 6 23:59:03.748598 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 6 23:59:03.748613 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 6 23:59:03.748630 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 6 23:59:03.748653 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 6 23:59:03.748670 kernel: i8042: Warning: Keylock active
Jul 6 23:59:03.748687 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 6 23:59:03.748704 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 6 23:59:03.748876 kernel: rtc_cmos 00:00: RTC can wake from S4
Jul 6 23:59:03.749012 kernel: rtc_cmos 00:00: registered as rtc0
Jul 6 23:59:03.749144 kernel: rtc_cmos 00:00: setting system clock to 2025-07-06T23:59:02 UTC (1751846342)
Jul 6 23:59:03.749295 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jul 6 23:59:03.749321 kernel: intel_pstate: CPU model not supported
Jul 6 23:59:03.749338 kernel: efifb: probing for efifb
Jul 6 23:59:03.749355 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Jul 6 23:59:03.749372 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Jul 6 23:59:03.749388 kernel: efifb: scrolling: redraw
Jul 6 23:59:03.749405 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 6 23:59:03.749421 kernel: Console: switching to colour frame buffer device 100x37
Jul 6 23:59:03.749438 kernel: fb0: EFI VGA frame buffer device
Jul 6 23:59:03.749454 kernel: pstore: Using crash dump compression: deflate
Jul 6 23:59:03.749474 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 6 23:59:03.749491 kernel: NET: Registered PF_INET6 protocol family
Jul 6 23:59:03.749507 kernel: Segment Routing with IPv6
Jul 6 23:59:03.749523 kernel: In-situ OAM (IOAM) with IPv6
Jul 6 23:59:03.749540 kernel: NET: Registered PF_PACKET protocol family
Jul 6 23:59:03.749576 kernel: Key type dns_resolver registered
Jul 6 23:59:03.749591 kernel: IPI shorthand broadcast: enabled
Jul 6 23:59:03.749632 kernel: sched_clock: Marking stable (1078008422, 328113900)->(1770352530, -364230208)
Jul 6 23:59:03.749653 kernel: registered taskstats version 1
Jul 6 23:59:03.749673 kernel: Loading compiled-in X.509 certificates
Jul 6 23:59:03.749691 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 6372c48ca52cc7f7bbee5675b604584c1c68ec5b'
Jul 6 23:59:03.749709 kernel: Key type .fscrypt registered
Jul 6 23:59:03.749726 kernel: Key type fscrypt-provisioning registered
Jul 6 23:59:03.749744 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 6 23:59:03.749761 kernel: ima: Allocated hash algorithm: sha1
Jul 6 23:59:03.749779 kernel: ima: No architecture policies found
Jul 6 23:59:03.749797 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 6 23:59:03.749818 kernel: clk: Disabling unused clocks
Jul 6 23:59:03.749835 kernel: Freeing unused kernel image (initmem) memory: 42868K
Jul 6 23:59:03.749853 kernel: Write protecting the kernel read-only data: 36864k
Jul 6 23:59:03.749871 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Jul 6 23:59:03.749888 kernel: Run /init as init process
Jul 6 23:59:03.749906 kernel: with arguments:
Jul 6 23:59:03.749923 kernel: /init
Jul 6 23:59:03.749940 kernel: with environment:
Jul 6 23:59:03.749957 kernel: HOME=/
Jul 6 23:59:03.749974 kernel: TERM=linux
Jul 6 23:59:03.749994 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 6 23:59:03.750016 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 6 23:59:03.750037 systemd[1]: Detected virtualization amazon.
Jul 6 23:59:03.750055 systemd[1]: Detected architecture x86-64.
Jul 6 23:59:03.750072 systemd[1]: Running in initrd.
Jul 6 23:59:03.750090 systemd[1]: No hostname configured, using default hostname.
Jul 6 23:59:03.750107 systemd[1]: Hostname set to <localhost>.
Jul 6 23:59:03.750130 systemd[1]: Initializing machine ID from VM UUID.
Jul 6 23:59:03.750147 systemd[1]: Queued start job for default target initrd.target.
Jul 6 23:59:03.750167 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:59:03.750185 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:59:03.750205 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 6 23:59:03.750224 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:59:03.750242 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 6 23:59:03.750264 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 6 23:59:03.750285 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 6 23:59:03.750304 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 6 23:59:03.750322 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:59:03.750344 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:59:03.750362 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:59:03.750382 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:59:03.750400 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:59:03.750418 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:59:03.750436 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:59:03.750454 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:59:03.750473 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 6 23:59:03.750491 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 6 23:59:03.750512 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:59:03.750531 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:59:03.750583 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:59:03.750603 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:59:03.750622 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 6 23:59:03.750640 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:59:03.750658 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 6 23:59:03.750677 systemd[1]: Starting systemd-fsck-usr.service...
Jul 6 23:59:03.750699 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:59:03.750716 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:59:03.750735 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:59:03.750754 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 6 23:59:03.750772 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:59:03.750824 systemd-journald[178]: Collecting audit messages is disabled.
Jul 6 23:59:03.750868 systemd[1]: Finished systemd-fsck-usr.service.
Jul 6 23:59:03.750888 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:59:03.750909 systemd-journald[178]: Journal started
Jul 6 23:59:03.750950 systemd-journald[178]: Runtime Journal (/run/log/journal/ec24d2483cece4fc9ebfc89f7e8f60eb) is 4.7M, max 38.2M, 33.4M free.
Jul 6 23:59:03.742608 systemd-modules-load[179]: Inserted module 'overlay'
Jul 6 23:59:03.754868 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:59:03.763710 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:59:03.771794 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:59:03.776727 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:59:03.790862 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:59:03.801799 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 6 23:59:03.801849 kernel: Bridge firewalling registered
Jul 6 23:59:03.802654 systemd-modules-load[179]: Inserted module 'br_netfilter'
Jul 6 23:59:03.806874 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:59:03.809190 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:59:03.820266 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:59:03.831795 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:59:03.834151 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:59:03.836707 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:59:03.840629 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 6 23:59:03.844206 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:59:03.851516 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:59:03.868706 dracut-cmdline[210]: dracut-dracut-053
Jul 6 23:59:03.873706 dracut-cmdline[210]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:59:03.897449 systemd-resolved[213]: Positive Trust Anchors:
Jul 6 23:59:03.897471 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:59:03.897534 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:59:03.907704 systemd-resolved[213]: Defaulting to hostname 'linux'.
Jul 6 23:59:03.910717 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:59:03.912080 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:59:03.963595 kernel: SCSI subsystem initialized
Jul 6 23:59:03.973586 kernel: Loading iSCSI transport class v2.0-870.
Jul 6 23:59:03.985581 kernel: iscsi: registered transport (tcp)
Jul 6 23:59:04.007822 kernel: iscsi: registered transport (qla4xxx)
Jul 6 23:59:04.007920 kernel: QLogic iSCSI HBA Driver
Jul 6 23:59:04.049011 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:59:04.054754 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 6 23:59:04.082783 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 6 23:59:04.082872 kernel: device-mapper: uevent: version 1.0.3
Jul 6 23:59:04.082896 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 6 23:59:04.126584 kernel: raid6: avx512x4 gen() 17323 MB/s
Jul 6 23:59:04.144591 kernel: raid6: avx512x2 gen() 16836 MB/s
Jul 6 23:59:04.162583 kernel: raid6: avx512x1 gen() 16952 MB/s
Jul 6 23:59:04.180599 kernel: raid6: avx2x4 gen() 16810 MB/s
Jul 6 23:59:04.198584 kernel: raid6: avx2x2 gen() 17125 MB/s
Jul 6 23:59:04.217469 kernel: raid6: avx2x1 gen() 12553 MB/s
Jul 6 23:59:04.217545 kernel: raid6: using algorithm avx512x4 gen() 17323 MB/s
Jul 6 23:59:04.235877 kernel: raid6: .... xor() 6399 MB/s, rmw enabled
Jul 6 23:59:04.235959 kernel: raid6: using avx512x2 recovery algorithm
Jul 6 23:59:04.258587 kernel: xor: automatically using best checksumming function avx
Jul 6 23:59:04.421590 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 6 23:59:04.432934 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:59:04.439956 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:59:04.455723 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Jul 6 23:59:04.461017 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:59:04.468790 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 6 23:59:04.492910 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation
Jul 6 23:59:04.526377 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:59:04.537822 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:59:04.592416 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:59:04.603859 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 6 23:59:04.634610 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:59:04.638138 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:59:04.640845 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:59:04.642169 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:59:04.649851 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 6 23:59:04.681511 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:59:04.693199 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jul 6 23:59:04.693502 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jul 6 23:59:04.698576 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jul 6 23:59:04.708580 kernel: cryptd: max_cpu_qlen set to 1000
Jul 6 23:59:04.714600 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:cf:2c:de:71:c9
Jul 6 23:59:04.721880 (udev-worker)[441]: Network interface NamePolicy= disabled on kernel command line.
Jul 6 23:59:04.754583 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 6 23:59:04.754657 kernel: AES CTR mode by8 optimization enabled
Jul 6 23:59:04.751757 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 6 23:59:04.751935 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:59:04.752872 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:59:04.765301 kernel: nvme nvme0: pci function 0000:00:04.0
Jul 6 23:59:04.765565 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jul 6 23:59:04.753521 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:59:04.753736 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:59:04.754340 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:59:04.769482 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:59:04.780575 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jul 6 23:59:04.785467 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:59:04.785671 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:59:04.790865 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 6 23:59:04.790919 kernel: GPT:9289727 != 16777215
Jul 6 23:59:04.793457 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 6 23:59:04.793523 kernel: GPT:9289727 != 16777215
Jul 6 23:59:04.795567 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 6 23:59:04.796761 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 6 23:59:04.803172 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:59:04.824206 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:59:04.829376 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:59:04.855677 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:59:04.872647 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (441)
Jul 6 23:59:04.903615 kernel: BTRFS: device fsid 01287863-c21f-4cbb-820d-bbae8208f32f devid 1 transid 34 /dev/nvme0n1p3 scanned by (udev-worker) (448)
Jul 6 23:59:04.950609 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jul 6 23:59:04.967543 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jul 6 23:59:04.975066 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jul 6 23:59:04.981389 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jul 6 23:59:04.982109 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jul 6 23:59:04.995850 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 6 23:59:05.003227 disk-uuid[625]: Primary Header is updated.
Jul 6 23:59:05.003227 disk-uuid[625]: Secondary Entries is updated.
Jul 6 23:59:05.003227 disk-uuid[625]: Secondary Header is updated.
Jul 6 23:59:05.009592 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 6 23:59:05.014584 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 6 23:59:05.022577 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 6 23:59:06.029285 disk-uuid[626]: The operation has completed successfully.
Jul 6 23:59:06.030073 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 6 23:59:06.171669 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 6 23:59:06.171807 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 6 23:59:06.201070 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 6 23:59:06.205747 sh[969]: Success
Jul 6 23:59:06.230605 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jul 6 23:59:06.329702 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 6 23:59:06.344948 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 6 23:59:06.348101 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 6 23:59:06.385754 kernel: BTRFS info (device dm-0): first mount of filesystem 01287863-c21f-4cbb-820d-bbae8208f32f
Jul 6 23:59:06.385834 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:59:06.387632 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 6 23:59:06.390460 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 6 23:59:06.390529 kernel: BTRFS info (device dm-0): using free space tree
Jul 6 23:59:06.474584 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jul 6 23:59:06.486757 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 6 23:59:06.488234 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 6 23:59:06.495816 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 6 23:59:06.498768 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 6 23:59:06.521202 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:59:06.521281 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:59:06.524713 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 6 23:59:06.532102 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 6 23:59:06.543818 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 6 23:59:06.547207 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:59:06.554454 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 6 23:59:06.559852 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 6 23:59:06.613799 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:59:06.619814 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:59:06.653371 systemd-networkd[1161]: lo: Link UP
Jul 6 23:59:06.653384 systemd-networkd[1161]: lo: Gained carrier
Jul 6 23:59:06.655326 systemd-networkd[1161]: Enumeration completed
Jul 6 23:59:06.655470 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:59:06.656281 systemd-networkd[1161]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:59:06.656286 systemd-networkd[1161]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:59:06.656406 systemd[1]: Reached target network.target - Network.
Jul 6 23:59:06.659838 systemd-networkd[1161]: eth0: Link UP
Jul 6 23:59:06.659843 systemd-networkd[1161]: eth0: Gained carrier
Jul 6 23:59:06.659859 systemd-networkd[1161]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:59:06.672944 systemd-networkd[1161]: eth0: DHCPv4 address 172.31.19.107/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 6 23:59:06.877603 ignition[1083]: Ignition 2.19.0
Jul 6 23:59:06.877621 ignition[1083]: Stage: fetch-offline
Jul 6 23:59:06.877921 ignition[1083]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:59:06.877936 ignition[1083]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:59:06.878299 ignition[1083]: Ignition finished successfully
Jul 6 23:59:06.880507 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:59:06.889819 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 6 23:59:06.905627 ignition[1170]: Ignition 2.19.0
Jul 6 23:59:06.905641 ignition[1170]: Stage: fetch
Jul 6 23:59:06.906130 ignition[1170]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:59:06.906145 ignition[1170]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:59:06.906275 ignition[1170]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:59:06.914591 ignition[1170]: PUT result: OK
Jul 6 23:59:06.916419 ignition[1170]: parsed url from cmdline: ""
Jul 6 23:59:06.916487 ignition[1170]: no config URL provided
Jul 6 23:59:06.916510 ignition[1170]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:59:06.916526 ignition[1170]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:59:06.916570 ignition[1170]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:59:06.917261 ignition[1170]: PUT result: OK
Jul 6 23:59:06.917319 ignition[1170]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jul 6 23:59:06.917873 ignition[1170]: GET result: OK
Jul 6 23:59:06.917948 ignition[1170]: parsing config with SHA512: 95591c85f5f5e4cf9670844249c30ab3eac7555fe603e326ea19854f41a10e62adb1d055024b2c3365d461da3dca8f209419bcf4f6d5ffefe444793b4aa94ad3
Jul 6 23:59:06.922627 unknown[1170]: fetched base config from "system"
Jul 6 23:59:06.922637 unknown[1170]: fetched base config from "system"
Jul 6 23:59:06.924424 unknown[1170]: fetched user config from "aws"
Jul 6 23:59:06.927061 ignition[1170]: fetch: fetch complete
Jul 6 23:59:06.927081 ignition[1170]: fetch: fetch passed
Jul 6 23:59:06.927279 ignition[1170]: Ignition finished successfully
Jul 6 23:59:06.929642 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 6 23:59:06.935823 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 6 23:59:06.953080 ignition[1177]: Ignition 2.19.0
Jul 6 23:59:06.953094 ignition[1177]: Stage: kargs
Jul 6 23:59:06.953677 ignition[1177]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:59:06.953692 ignition[1177]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:59:06.953823 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:59:06.955239 ignition[1177]: PUT result: OK
Jul 6 23:59:06.957948 ignition[1177]: kargs: kargs passed
Jul 6 23:59:06.958033 ignition[1177]: Ignition finished successfully
Jul 6 23:59:06.960110 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 6 23:59:06.964952 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 6 23:59:06.982810 ignition[1183]: Ignition 2.19.0
Jul 6 23:59:06.982824 ignition[1183]: Stage: disks
Jul 6 23:59:06.983320 ignition[1183]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:59:06.983335 ignition[1183]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:59:06.983457 ignition[1183]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:59:06.984847 ignition[1183]: PUT result: OK
Jul 6 23:59:06.987757 ignition[1183]: disks: disks passed
Jul 6 23:59:06.987842 ignition[1183]: Ignition finished successfully
Jul 6 23:59:06.989432 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 6 23:59:06.990500 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 6 23:59:06.990947 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 6 23:59:06.991506 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:59:06.992095 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:59:06.992858 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:59:06.998838 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 6 23:59:07.030420 systemd-fsck[1191]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 6 23:59:07.033475 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 6 23:59:07.039714 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 6 23:59:07.154577 kernel: EXT4-fs (nvme0n1p9): mounted filesystem c3eefe20-4a42-420d-8034-4d5498275b2f r/w with ordered data mode. Quota mode: none.
Jul 6 23:59:07.155138 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 6 23:59:07.156399 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:59:07.169770 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:59:07.173245 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 6 23:59:07.175621 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 6 23:59:07.176179 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 6 23:59:07.176220 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:59:07.196162 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 6 23:59:07.202387 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1210)
Jul 6 23:59:07.213772 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:59:07.213853 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:59:07.213874 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 6 23:59:07.215794 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 6 23:59:07.220873 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 6 23:59:07.223123 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:59:07.515212 initrd-setup-root[1234]: cut: /sysroot/etc/passwd: No such file or directory
Jul 6 23:59:07.556076 initrd-setup-root[1241]: cut: /sysroot/etc/group: No such file or directory
Jul 6 23:59:07.571827 initrd-setup-root[1248]: cut: /sysroot/etc/shadow: No such file or directory
Jul 6 23:59:07.577620 initrd-setup-root[1255]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 6 23:59:07.827000 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 6 23:59:07.832952 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 6 23:59:07.835495 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 6 23:59:07.848483 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 6 23:59:07.851016 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:59:07.885574 ignition[1323]: INFO : Ignition 2.19.0 Jul 6 23:59:07.885574 ignition[1323]: INFO : Stage: mount Jul 6 23:59:07.885574 ignition[1323]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:59:07.885574 ignition[1323]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 6 23:59:07.885574 ignition[1323]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 6 23:59:07.890223 ignition[1323]: INFO : PUT result: OK Jul 6 23:59:07.890321 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 6 23:59:07.893568 ignition[1323]: INFO : mount: mount passed Jul 6 23:59:07.894204 ignition[1323]: INFO : Ignition finished successfully Jul 6 23:59:07.895160 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 6 23:59:07.901740 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 6 23:59:07.918917 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:59:07.937593 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1336) Jul 6 23:59:07.941995 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:59:07.942076 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:59:07.942099 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 6 23:59:07.949587 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 6 23:59:07.952271 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 6 23:59:07.987562 ignition[1353]: INFO : Ignition 2.19.0 Jul 6 23:59:07.987562 ignition[1353]: INFO : Stage: files Jul 6 23:59:07.989383 ignition[1353]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:59:07.989383 ignition[1353]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 6 23:59:07.989383 ignition[1353]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 6 23:59:07.989383 ignition[1353]: INFO : PUT result: OK Jul 6 23:59:07.992206 ignition[1353]: DEBUG : files: compiled without relabeling support, skipping Jul 6 23:59:07.993148 ignition[1353]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 6 23:59:07.993148 ignition[1353]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 6 23:59:08.009595 ignition[1353]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 6 23:59:08.010870 ignition[1353]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 6 23:59:08.010870 ignition[1353]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 6 23:59:08.010303 unknown[1353]: wrote ssh authorized keys file for user: core Jul 6 23:59:08.023091 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 6 23:59:08.023091 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 6 23:59:08.023091 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 6 23:59:08.023091 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 6 23:59:08.116502 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 6 23:59:08.178781 systemd-networkd[1161]: eth0: Gained IPv6LL Jul 6 23:59:08.335806 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 6 23:59:08.337076 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 6 23:59:08.337076 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 6 23:59:08.337076 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:59:08.337076 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:59:08.337076 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:59:08.337076 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:59:08.337076 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:59:08.337076 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:59:08.337076 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:59:08.337076 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:59:08.337076 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 6 23:59:08.337076 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 6 23:59:08.337076 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 6 23:59:08.337076 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jul 6 23:59:09.034716 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 6 23:59:09.387092 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 6 23:59:09.387092 ignition[1353]: INFO : files: op(c): [started] processing unit "containerd.service" Jul 6 23:59:09.389974 ignition[1353]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 6 23:59:09.391126 ignition[1353]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 6 23:59:09.391126 ignition[1353]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jul 6 23:59:09.391126 ignition[1353]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jul 6 23:59:09.391126 ignition[1353]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:59:09.391126 ignition[1353]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:59:09.391126 ignition[1353]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jul 6 23:59:09.391126 ignition[1353]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jul 6 23:59:09.391126 ignition[1353]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jul 6 23:59:09.391126 ignition[1353]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:59:09.391126 ignition[1353]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:59:09.391126 ignition[1353]: INFO : files: files passed Jul 6 23:59:09.404980 ignition[1353]: INFO : Ignition finished successfully Jul 6 23:59:09.393159 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 6 23:59:09.400890 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 6 23:59:09.408701 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 6 23:59:09.411041 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 6 23:59:09.411209 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 6 23:59:09.437770 initrd-setup-root-after-ignition[1381]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:59:09.437770 initrd-setup-root-after-ignition[1381]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:59:09.441402 initrd-setup-root-after-ignition[1385]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:59:09.442932 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:59:09.444126 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 6 23:59:09.449774 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 6 23:59:09.477728 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 6 23:59:09.477868 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 6 23:59:09.479601 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 6 23:59:09.480404 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 6 23:59:09.481359 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 6 23:59:09.487750 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 6 23:59:09.501043 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:59:09.505781 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 6 23:59:09.520393 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:59:09.521258 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:59:09.522326 systemd[1]: Stopped target timers.target - Timer Units. Jul 6 23:59:09.523234 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 6 23:59:09.523425 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:59:09.524825 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 6 23:59:09.525692 systemd[1]: Stopped target basic.target - Basic System. Jul 6 23:59:09.526495 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 6 23:59:09.527279 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:59:09.528050 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 6 23:59:09.528981 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 6 23:59:09.529731 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:59:09.530494 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 6 23:59:09.531690 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 6 23:59:09.532372 systemd[1]: Stopped target swap.target - Swaps. Jul 6 23:59:09.533245 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 6 23:59:09.533433 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:59:09.534503 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:59:09.535297 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:59:09.535988 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 6 23:59:09.536141 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:59:09.536964 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 6 23:59:09.537149 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 6 23:59:09.538449 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 6 23:59:09.538668 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:59:09.539347 systemd[1]: ignition-files.service: Deactivated successfully. Jul 6 23:59:09.539502 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 6 23:59:09.546833 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 6 23:59:09.551614 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 6 23:59:09.553043 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 6 23:59:09.553806 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:59:09.557458 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 6 23:59:09.559437 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:59:09.567584 ignition[1406]: INFO : Ignition 2.19.0 Jul 6 23:59:09.567584 ignition[1406]: INFO : Stage: umount Jul 6 23:59:09.567584 ignition[1406]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:59:09.567584 ignition[1406]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 6 23:59:09.572777 ignition[1406]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 6 23:59:09.572777 ignition[1406]: INFO : PUT result: OK Jul 6 23:59:09.569538 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 6 23:59:09.569702 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jul 6 23:59:09.580400 ignition[1406]: INFO : umount: umount passed Jul 6 23:59:09.580400 ignition[1406]: INFO : Ignition finished successfully Jul 6 23:59:09.582084 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 6 23:59:09.582228 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 6 23:59:09.584059 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 6 23:59:09.584175 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 6 23:59:09.585759 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 6 23:59:09.585827 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 6 23:59:09.586525 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 6 23:59:09.587225 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 6 23:59:09.588692 systemd[1]: Stopped target network.target - Network. Jul 6 23:59:09.589812 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 6 23:59:09.589890 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:59:09.591878 systemd[1]: Stopped target paths.target - Path Units. Jul 6 23:59:09.592448 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 6 23:59:09.596717 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:59:09.597238 systemd[1]: Stopped target slices.target - Slice Units. Jul 6 23:59:09.598273 systemd[1]: Stopped target sockets.target - Socket Units. Jul 6 23:59:09.598983 systemd[1]: iscsid.socket: Deactivated successfully. Jul 6 23:59:09.599043 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:59:09.599679 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 6 23:59:09.599736 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:59:09.600342 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 6 23:59:09.600416 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 6 23:59:09.601199 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 6 23:59:09.601265 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 6 23:59:09.602250 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 6 23:59:09.603144 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 6 23:59:09.605484 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 6 23:59:09.605633 systemd-networkd[1161]: eth0: DHCPv6 lease lost Jul 6 23:59:09.607653 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 6 23:59:09.608701 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 6 23:59:09.609940 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 6 23:59:09.610092 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 6 23:59:09.612411 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 6 23:59:09.612765 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 6 23:59:09.615197 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 6 23:59:09.615258 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:59:09.615719 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 6 23:59:09.615787 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 6 23:59:09.621806 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Jul 6 23:59:09.622449 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 6 23:59:09.622531 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:59:09.623275 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:59:09.623340 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:59:09.623876 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 6 23:59:09.623926 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 6 23:59:09.624401 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 6 23:59:09.624456 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:59:09.625302 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:59:09.641839 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 6 23:59:09.642066 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:59:09.643657 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 6 23:59:09.643782 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 6 23:59:09.644977 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 6 23:59:09.645025 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:59:09.645507 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 6 23:59:09.645625 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:59:09.647080 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 6 23:59:09.647143 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 6 23:59:09.648163 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:59:09.648227 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:59:09.653755 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 6 23:59:09.654924 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 6 23:59:09.655620 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:59:09.657183 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 6 23:59:09.657255 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:59:09.658073 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 6 23:59:09.658133 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:59:09.659113 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:59:09.659172 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:59:09.661688 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 6 23:59:09.661824 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 6 23:59:09.669372 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 6 23:59:09.669532 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 6 23:59:09.670855 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 6 23:59:09.674809 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 6 23:59:09.689575 systemd[1]: Switching root. 
Jul 6 23:59:09.714636 systemd-journald[178]: Journal stopped Jul 6 23:59:11.423055 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). Jul 6 23:59:11.423116 kernel: SELinux: policy capability network_peer_controls=1 Jul 6 23:59:11.423135 kernel: SELinux: policy capability open_perms=1 Jul 6 23:59:11.423147 kernel: SELinux: policy capability extended_socket_class=1 Jul 6 23:59:11.423159 kernel: SELinux: policy capability always_check_network=0 Jul 6 23:59:11.423174 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 6 23:59:11.423190 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 6 23:59:11.423202 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 6 23:59:11.423214 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 6 23:59:11.423230 kernel: audit: type=1403 audit(1751846350.289:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 6 23:59:11.423243 systemd[1]: Successfully loaded SELinux policy in 52.132ms. Jul 6 23:59:11.423268 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.011ms. Jul 6 23:59:11.423283 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 6 23:59:11.423296 systemd[1]: Detected virtualization amazon. Jul 6 23:59:11.423309 systemd[1]: Detected architecture x86-64. Jul 6 23:59:11.425611 systemd[1]: Detected first boot. Jul 6 23:59:11.425645 systemd[1]: Initializing machine ID from VM UUID. Jul 6 23:59:11.425659 zram_generator::config[1465]: No configuration found. Jul 6 23:59:11.425674 systemd[1]: Populated /etc with preset unit settings. Jul 6 23:59:11.425694 systemd[1]: Queued start job for default target multi-user.target. Jul 6 23:59:11.425712 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jul 6 23:59:11.425726 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 6 23:59:11.425739 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 6 23:59:11.425751 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 6 23:59:11.425764 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 6 23:59:11.425776 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 6 23:59:11.425789 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 6 23:59:11.425801 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 6 23:59:11.425820 systemd[1]: Created slice user.slice - User and Session Slice. Jul 6 23:59:11.425832 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:59:11.425845 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:59:11.425858 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 6 23:59:11.425870 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 6 23:59:11.425882 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
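
"Initializing machine ID from VM UUID" above means first-boot systemd seeded /etc/machine-id from the hypervisor-provided DMI product UUID rather than generating a random ID. A rough sketch of the shape of that derivation, assuming the usual sysfs path (systemd's real sd-id128 code validates the UUID and supports several other sources; reading the file requires root):

    from pathlib import Path

    # e.g. "EC2A1B2C-3D4E-5F60-7182-93A4B5C6D7E8" -> 32 lowercase hex digits
    raw = Path("/sys/class/dmi/id/product_uuid").read_text().strip()
    machine_id = raw.replace("-", "").lower()
    assert len(machine_id) == 32
    print(machine_id)
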
Jul 6 23:59:11.425895 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:59:11.425907 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 6 23:59:11.425922 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:59:11.425935 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 6 23:59:11.425947 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:59:11.425963 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:59:11.425975 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:59:11.425987 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:59:11.426000 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 6 23:59:11.426012 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 6 23:59:11.426027 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 6 23:59:11.426039 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 6 23:59:11.426051 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:59:11.426063 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:59:11.426075 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:59:11.426089 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 6 23:59:11.426101 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 6 23:59:11.426114 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 6 23:59:11.426126 systemd[1]: Mounting media.mount - External Media Directory... Jul 6 23:59:11.426138 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:59:11.426153 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 6 23:59:11.426166 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 6 23:59:11.426177 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 6 23:59:11.426190 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 6 23:59:11.426203 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:59:11.426215 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:59:11.426227 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 6 23:59:11.426239 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:59:11.426255 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:59:11.426270 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:59:11.426284 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 6 23:59:11.426296 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:59:11.426308 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 6 23:59:11.426321 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. 
Jul 6 23:59:11.426334 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jul 6 23:59:11.426346 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:59:11.426360 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:59:11.426375 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 6 23:59:11.426388 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 6 23:59:11.426400 kernel: loop: module loaded Jul 6 23:59:11.426414 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:59:11.426428 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:59:11.426479 systemd-journald[1569]: Collecting audit messages is disabled. Jul 6 23:59:11.426505 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 6 23:59:11.426520 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 6 23:59:11.426532 systemd[1]: Mounted media.mount - External Media Directory. Jul 6 23:59:11.426544 kernel: ACPI: bus type drm_connector registered Jul 6 23:59:11.427603 kernel: fuse: init (API version 7.39) Jul 6 23:59:11.427621 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 6 23:59:11.427639 systemd-journald[1569]: Journal started Jul 6 23:59:11.427669 systemd-journald[1569]: Runtime Journal (/run/log/journal/ec24d2483cece4fc9ebfc89f7e8f60eb) is 4.7M, max 38.2M, 33.4M free. Jul 6 23:59:11.429640 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 6 23:59:11.434570 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:59:11.435678 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 6 23:59:11.436706 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 6 23:59:11.437525 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:59:11.438273 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 6 23:59:11.438445 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 6 23:59:11.439087 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:59:11.439252 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:59:11.439964 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:59:11.440126 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:59:11.441017 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:59:11.441187 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:59:11.441850 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 6 23:59:11.442016 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 6 23:59:11.442667 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:59:11.442951 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:59:11.443787 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:59:11.444502 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Jul 6 23:59:11.445368 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 6 23:59:11.456388 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 6 23:59:11.462742 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 6 23:59:11.468779 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 6 23:59:11.470498 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 6 23:59:11.481772 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 6 23:59:11.492185 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 6 23:59:11.493411 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:59:11.504958 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 6 23:59:11.506875 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:59:11.512834 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:59:11.526784 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 6 23:59:11.533766 systemd-journald[1569]: Time spent on flushing to /var/log/journal/ec24d2483cece4fc9ebfc89f7e8f60eb is 73.263ms for 971 entries. Jul 6 23:59:11.533766 systemd-journald[1569]: System Journal (/var/log/journal/ec24d2483cece4fc9ebfc89f7e8f60eb) is 8.0M, max 195.6M, 187.6M free. Jul 6 23:59:11.628748 systemd-journald[1569]: Received client request to flush runtime journal. Jul 6 23:59:11.545545 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 6 23:59:11.548119 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 6 23:59:11.557063 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 6 23:59:11.564116 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 6 23:59:11.572819 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:59:11.582842 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 6 23:59:11.629534 systemd-tmpfiles[1617]: ACLs are not supported, ignoring. Jul 6 23:59:11.629594 systemd-tmpfiles[1617]: ACLs are not supported, ignoring. Jul 6 23:59:11.635482 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 6 23:59:11.641670 udevadm[1625]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 6 23:59:11.646848 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:59:11.660866 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 6 23:59:11.662964 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:59:11.710885 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 6 23:59:11.722857 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:59:11.748326 systemd-tmpfiles[1639]: ACLs are not supported, ignoring. 
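
The journald figures in this stretch are self-consistent: for each journal, used plus free space tracks the configured maximum (runtime: 4.7M + 33.4M ≈ 38.2M; system: 8.0M + 187.6M = 195.6M), and flushing 971 entries in 73.263 ms comes to roughly 75 µs per entry. A quick check of the arithmetic:

    print(4.7 + 33.4)   # 38.1, vs. the runtime journal max of 38.2M
    print(8.0 + 187.6)  # 195.6, the system journal max exactly
    print(f"{73.263 / 971 * 1000:.0f} us per flushed entry")  # ~75
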
Jul 6 23:59:11.748359 systemd-tmpfiles[1639]: ACLs are not supported, ignoring. Jul 6 23:59:11.756841 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:59:12.368505 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 6 23:59:12.381925 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:59:12.424036 systemd-udevd[1645]: Using default interface naming scheme 'v255'. Jul 6 23:59:12.479376 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:59:12.488849 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:59:12.521792 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 6 23:59:12.533257 (udev-worker)[1659]: Network interface NamePolicy= disabled on kernel command line. Jul 6 23:59:12.541543 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jul 6 23:59:12.597738 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 6 23:59:12.619121 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jul 6 23:59:12.669579 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 6 23:59:12.684701 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Jul 6 23:59:12.699919 kernel: ACPI: button: Power Button [PWRF] Jul 6 23:59:12.700030 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Jul 6 23:59:12.707774 systemd-networkd[1648]: lo: Link UP Jul 6 23:59:12.708200 systemd-networkd[1648]: lo: Gained carrier Jul 6 23:59:12.708580 kernel: ACPI: button: Sleep Button [SLPF] Jul 6 23:59:12.715741 kernel: mousedev: PS/2 mouse device common for all mice Jul 6 23:59:12.713526 systemd-networkd[1648]: Enumeration completed Jul 6 23:59:12.714110 systemd-networkd[1648]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:59:12.714116 systemd-networkd[1648]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:59:12.716951 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:59:12.719852 systemd-networkd[1648]: eth0: Link UP Jul 6 23:59:12.720456 systemd-networkd[1648]: eth0: Gained carrier Jul 6 23:59:12.720679 systemd-networkd[1648]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:59:12.727067 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 6 23:59:12.728716 systemd-networkd[1648]: eth0: DHCPv4 address 172.31.19.107/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 6 23:59:12.741812 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:59:12.759573 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1647) Jul 6 23:59:12.917673 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 6 23:59:12.918644 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 6 23:59:12.926035 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 6 23:59:12.937312 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
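
The lease logged above, "DHCPv4 address 172.31.19.107/20, gateway 172.31.16.1 acquired from 172.31.16.1", can be unpacked with the standard library to confirm that the gateway (also the DHCP server here, as is typical for EC2 VPC subnets) sits at the bottom of the same /20:

    import ipaddress

    iface = ipaddress.ip_interface("172.31.19.107/20")
    gateway = ipaddress.ip_address("172.31.16.1")

    print(iface.network)             # 172.31.16.0/20
    print(iface.netmask)             # 255.255.240.0
    print(gateway in iface.network)  # True: the gateway is on-link
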
Jul 6 23:59:12.955413 lvm[1766]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:59:12.982837 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 6 23:59:12.984058 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:59:12.988986 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 6 23:59:12.995938 lvm[1772]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:59:13.024811 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 6 23:59:13.026261 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 6 23:59:13.026888 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 6 23:59:13.026991 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:59:13.027444 systemd[1]: Reached target machines.target - Containers. Jul 6 23:59:13.029411 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 6 23:59:13.034780 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 6 23:59:13.036788 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 6 23:59:13.037358 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:59:13.039709 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 6 23:59:13.048745 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 6 23:59:13.050697 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 6 23:59:13.052168 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 6 23:59:13.070636 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 6 23:59:13.088688 kernel: loop0: detected capacity change from 0 to 142488 Jul 6 23:59:13.091159 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 6 23:59:13.092976 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 6 23:59:13.164762 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 6 23:59:13.181753 kernel: loop1: detected capacity change from 0 to 221472 Jul 6 23:59:13.240751 kernel: loop2: detected capacity change from 0 to 140768 Jul 6 23:59:13.320707 kernel: loop3: detected capacity change from 0 to 61336 Jul 6 23:59:13.361074 kernel: loop4: detected capacity change from 0 to 142488 Jul 6 23:59:13.402583 kernel: loop5: detected capacity change from 0 to 221472 Jul 6 23:59:13.438697 kernel: loop6: detected capacity change from 0 to 140768 Jul 6 23:59:13.461590 kernel: loop7: detected capacity change from 0 to 61336 Jul 6 23:59:13.478900 (sd-merge)[1794]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jul 6 23:59:13.479531 (sd-merge)[1794]: Merged extensions into '/usr'. Jul 6 23:59:13.488449 systemd[1]: Reloading requested from client PID 1780 ('systemd-sysext') (unit systemd-sysext.service)... Jul 6 23:59:13.488470 systemd[1]: Reloading... Jul 6 23:59:13.588584 zram_generator::config[1822]: No configuration found. 
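
The (sd-merge) lines above are systemd-sysext combining extension images into an overlay on /usr, which is how the kubernetes.raw symlink Ignition wrote earlier becomes live /usr content. Per the systemd-sysext documentation, images are discovered as *.raw files or symlinks under directories such as /etc/extensions, /run/extensions, and /var/lib/extensions; a hypothetical lister of what would be considered:

    from pathlib import Path

    for d in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
        base = Path(d)
        if not base.is_dir():
            continue
        for img in sorted(base.glob("*.raw")):
            # resolve() follows symlinks such as
            # /etc/extensions/kubernetes.raw -> /opt/extensions/...
            print(img, "->", img.resolve())
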
Jul 6 23:59:13.770756 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:59:13.873228 systemd[1]: Reloading finished in 383 ms. Jul 6 23:59:13.892189 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 6 23:59:13.903881 systemd[1]: Starting ensure-sysext.service... Jul 6 23:59:13.906733 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:59:13.928983 systemd[1]: Reloading requested from client PID 1879 ('systemctl') (unit ensure-sysext.service)... Jul 6 23:59:13.929004 systemd[1]: Reloading... Jul 6 23:59:13.942897 systemd-tmpfiles[1880]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 6 23:59:13.943424 systemd-tmpfiles[1880]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 6 23:59:13.944761 systemd-tmpfiles[1880]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 6 23:59:13.945188 systemd-tmpfiles[1880]: ACLs are not supported, ignoring. Jul 6 23:59:13.945288 systemd-tmpfiles[1880]: ACLs are not supported, ignoring. Jul 6 23:59:13.955483 systemd-tmpfiles[1880]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:59:13.955505 systemd-tmpfiles[1880]: Skipping /boot Jul 6 23:59:13.971406 systemd-tmpfiles[1880]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:59:13.971426 systemd-tmpfiles[1880]: Skipping /boot Jul 6 23:59:14.055316 zram_generator::config[1914]: No configuration found. Jul 6 23:59:14.204134 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:59:14.285140 ldconfig[1776]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 6 23:59:14.296104 systemd[1]: Reloading finished in 366 ms. Jul 6 23:59:14.311379 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 6 23:59:14.318259 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:59:14.330849 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 6 23:59:14.335752 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 6 23:59:14.347450 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 6 23:59:14.354518 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:59:14.364835 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 6 23:59:14.378285 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:59:14.379228 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:59:14.384900 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:59:14.398969 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:59:14.410236 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
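
The "Duplicate line for path" warnings above mean two tmpfiles.d fragments declare the same path; systemd honors the first declaration and ignores the rest. A hypothetical detector over the stock fragments (tmpfiles.d lines are whitespace-separated, with the path in the second field):

    from collections import defaultdict
    from pathlib import Path

    declared = defaultdict(list)
    for frag in sorted(Path("/usr/lib/tmpfiles.d").glob("*.conf")):
        for line in frag.read_text().splitlines():
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            fields = line.split()
            if len(fields) >= 2:
                declared[fields[1]].append(frag.name)

    for path, sources in declared.items():
        if len(sources) > 1:
            print(f"duplicate {path}: {', '.join(sources)}")
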
Jul 6 23:59:14.412760 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:59:14.412988 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:59:14.424569 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:59:14.427919 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:59:14.434895 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:59:14.435146 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:59:14.450139 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:59:14.450426 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:59:14.460070 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:59:14.464774 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:59:14.474966 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:59:14.487952 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:59:14.490006 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:59:14.490242 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:59:14.494052 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 6 23:59:14.506819 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:59:14.507090 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:59:14.510210 augenrules[2006]: No rules Jul 6 23:59:14.512794 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 6 23:59:14.536313 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 6 23:59:14.538481 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:59:14.538762 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:59:14.546848 systemd[1]: Finished ensure-sysext.service. Jul 6 23:59:14.549755 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:59:14.550531 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:59:14.554870 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:59:14.565018 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:59:14.571879 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:59:14.574529 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:59:14.575618 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:59:14.575703 systemd[1]: Reached target time-set.target - System Time Set. 
Jul 6 23:59:14.582939 systemd-resolved[1982]: Positive Trust Anchors: Jul 6 23:59:14.582956 systemd-resolved[1982]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:59:14.583015 systemd-resolved[1982]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:59:14.586492 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 6 23:59:14.588961 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:59:14.591748 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 6 23:59:14.593286 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:59:14.594768 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:59:14.598526 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:59:14.601283 systemd-resolved[1982]: Defaulting to hostname 'linux'. Jul 6 23:59:14.601883 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:59:14.607005 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:59:14.607737 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:59:14.608021 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:59:14.612001 systemd[1]: Reached target network.target - Network. Jul 6 23:59:14.612864 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:59:14.613336 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:59:14.613373 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 6 23:59:14.622629 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 6 23:59:14.623781 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:59:14.624340 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 6 23:59:14.624955 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 6 23:59:14.625479 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 6 23:59:14.626025 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 6 23:59:14.626352 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 6 23:59:14.626741 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 6 23:59:14.626775 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:59:14.627090 systemd[1]: Reached target timers.target - Timer Units. 
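
The positive trust anchor resolved loads above is the DNS root zone's DS record. Per RFC 4034 the fields ahead of the digest are key tag, algorithm, and digest type: 20326 is the root KSK introduced in the 2017 rollover, algorithm 8 is RSA/SHA-256, and digest type 2 is SHA-256, so the digest should be 32 bytes:

    ds = (". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458"
          "e880409bbc683457104237c7f8ec8d")
    _, _, _, key_tag, algorithm, digest_type, digest = ds.split(maxsplit=6)

    print("key tag:    ", key_tag)      # 20326 (root KSK-2017)
    print("algorithm:  ", algorithm)    # 8 = RSA/SHA-256
    print("digest type:", digest_type)  # 2 = SHA-256
    print("digest len: ", len(digest) // 2, "bytes")  # 32
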
Jul 6 23:59:14.628859 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 6 23:59:14.630747 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 6 23:59:14.633094 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 6 23:59:14.634720 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 6 23:59:14.635157 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:59:14.635473 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:59:14.635975 systemd[1]: System is tainted: cgroupsv1 Jul 6 23:59:14.636020 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:59:14.636049 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:59:14.637685 systemd[1]: Starting containerd.service - containerd container runtime... Jul 6 23:59:14.647934 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 6 23:59:14.652496 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 6 23:59:14.660880 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 6 23:59:14.665701 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 6 23:59:14.666320 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 6 23:59:14.672839 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 6 23:59:14.679586 jq[2044]: false Jul 6 23:59:14.680091 systemd[1]: Started ntpd.service - Network Time Service. Jul 6 23:59:14.697234 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 6 23:59:14.710654 systemd-networkd[1648]: eth0: Gained IPv6LL Jul 6 23:59:14.724413 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 6 23:59:14.742485 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 6 23:59:14.760882 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 6 23:59:14.770446 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 6 23:59:14.773162 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 6 23:59:14.781506 systemd[1]: Starting update-engine.service - Update Engine... Jul 6 23:59:14.805711 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 6 23:59:14.812020 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 6 23:59:14.817393 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 6 23:59:14.817836 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 6 23:59:14.824613 jq[2068]: true Jul 6 23:59:14.833910 systemd[1]: motdgen.service: Deactivated successfully. Jul 6 23:59:14.834274 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jul 6 23:59:14.848652 extend-filesystems[2045]: Found loop4 Jul 6 23:59:14.848652 extend-filesystems[2045]: Found loop5 Jul 6 23:59:14.848652 extend-filesystems[2045]: Found loop6 Jul 6 23:59:14.848652 extend-filesystems[2045]: Found loop7 Jul 6 23:59:14.848652 extend-filesystems[2045]: Found nvme0n1 Jul 6 23:59:14.848652 extend-filesystems[2045]: Found nvme0n1p1 Jul 6 23:59:14.848652 extend-filesystems[2045]: Found nvme0n1p2 Jul 6 23:59:14.848652 extend-filesystems[2045]: Found nvme0n1p3 Jul 6 23:59:14.848652 extend-filesystems[2045]: Found usr Jul 6 23:59:14.848652 extend-filesystems[2045]: Found nvme0n1p4 Jul 6 23:59:14.848652 extend-filesystems[2045]: Found nvme0n1p6 Jul 6 23:59:14.848652 extend-filesystems[2045]: Found nvme0n1p7 Jul 6 23:59:14.848652 extend-filesystems[2045]: Found nvme0n1p9 Jul 6 23:59:14.848652 extend-filesystems[2045]: Checking size of /dev/nvme0n1p9 Jul 6 23:59:14.887758 dbus-daemon[2042]: [system] SELinux support is enabled Jul 6 23:59:14.852342 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 6 23:59:14.911829 update_engine[2063]: I20250706 23:59:14.861819 2063 main.cc:92] Flatcar Update Engine starting Jul 6 23:59:14.868841 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 6 23:59:14.912781 dbus-daemon[2042]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1648 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 6 23:59:14.888815 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 6 23:59:14.920167 (ntainerd)[2083]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 6 23:59:14.944115 jq[2081]: true Jul 6 23:59:14.958952 update_engine[2063]: I20250706 23:59:14.942407 2063 update_check_scheduler.cc:74] Next update check in 10m0s Jul 6 23:59:14.941519 ntpd[2047]: ntpd 4.2.8p17@1.4004-o Sun Jul 6 21:48:38 UTC 2025 (1): Starting Jul 6 23:59:14.959336 ntpd[2047]: 6 Jul 23:59:14 ntpd[2047]: ntpd 4.2.8p17@1.4004-o Sun Jul 6 21:48:38 UTC 2025 (1): Starting Jul 6 23:59:14.959336 ntpd[2047]: 6 Jul 23:59:14 ntpd[2047]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 6 23:59:14.959336 ntpd[2047]: 6 Jul 23:59:14 ntpd[2047]: ---------------------------------------------------- Jul 6 23:59:14.959336 ntpd[2047]: 6 Jul 23:59:14 ntpd[2047]: ntp-4 is maintained by Network Time Foundation, Jul 6 23:59:14.959336 ntpd[2047]: 6 Jul 23:59:14 ntpd[2047]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 6 23:59:14.959336 ntpd[2047]: 6 Jul 23:59:14 ntpd[2047]: corporation. 
Support and training for ntp-4 are Jul 6 23:59:14.959336 ntpd[2047]: 6 Jul 23:59:14 ntpd[2047]: available at https://www.nwtime.org/support Jul 6 23:59:14.959336 ntpd[2047]: 6 Jul 23:59:14 ntpd[2047]: ---------------------------------------------------- Jul 6 23:59:14.959336 ntpd[2047]: 6 Jul 23:59:14 ntpd[2047]: proto: precision = 0.090 usec (-23) Jul 6 23:59:14.959336 ntpd[2047]: 6 Jul 23:59:14 ntpd[2047]: basedate set to 2025-06-24 Jul 6 23:59:14.959336 ntpd[2047]: 6 Jul 23:59:14 ntpd[2047]: gps base set to 2025-06-29 (week 2373) Jul 6 23:59:14.959849 coreos-metadata[2041]: Jul 06 23:59:14.950 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 6 23:59:14.941569 ntpd[2047]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 6 23:59:14.941581 ntpd[2047]: ---------------------------------------------------- Jul 6 23:59:14.941592 ntpd[2047]: ntp-4 is maintained by Network Time Foundation, Jul 6 23:59:14.941602 ntpd[2047]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 6 23:59:14.941611 ntpd[2047]: corporation. Support and training for ntp-4 are Jul 6 23:59:14.941622 ntpd[2047]: available at https://www.nwtime.org/support Jul 6 23:59:14.941631 ntpd[2047]: ---------------------------------------------------- Jul 6 23:59:14.966713 ntpd[2047]: 6 Jul 23:59:14 ntpd[2047]: Listen and drop on 0 v6wildcard [::]:123 Jul 6 23:59:14.966713 ntpd[2047]: 6 Jul 23:59:14 ntpd[2047]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 6 23:59:14.966713 ntpd[2047]: 6 Jul 23:59:14 ntpd[2047]: Listen normally on 2 lo 127.0.0.1:123 Jul 6 23:59:14.966713 ntpd[2047]: 6 Jul 23:59:14 ntpd[2047]: Listen normally on 3 eth0 172.31.19.107:123 Jul 6 23:59:14.966713 ntpd[2047]: 6 Jul 23:59:14 ntpd[2047]: Listen normally on 4 lo [::1]:123 Jul 6 23:59:14.966713 ntpd[2047]: 6 Jul 23:59:14 ntpd[2047]: Listen normally on 5 eth0 [fe80::4cf:2cff:fede:71c9%2]:123 Jul 6 23:59:14.966713 ntpd[2047]: 6 Jul 23:59:14 ntpd[2047]: Listening on routing socket on fd #22 for interface updates Jul 6 23:59:14.951493 ntpd[2047]: proto: precision = 0.090 usec (-23) Jul 6 23:59:14.967061 coreos-metadata[2041]: Jul 06 23:59:14.966 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jul 6 23:59:14.954831 ntpd[2047]: basedate set to 2025-06-24 Jul 6 23:59:14.954855 ntpd[2047]: gps base set to 2025-06-29 (week 2373) Jul 6 23:59:14.962354 ntpd[2047]: Listen and drop on 0 v6wildcard [::]:123 Jul 6 23:59:14.962422 ntpd[2047]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 6 23:59:14.965767 ntpd[2047]: Listen normally on 2 lo 127.0.0.1:123 Jul 6 23:59:14.965827 ntpd[2047]: Listen normally on 3 eth0 172.31.19.107:123 Jul 6 23:59:14.965878 ntpd[2047]: Listen normally on 4 lo [::1]:123 Jul 6 23:59:14.965939 ntpd[2047]: Listen normally on 5 eth0 [fe80::4cf:2cff:fede:71c9%2]:123 Jul 6 23:59:14.965991 ntpd[2047]: Listening on routing socket on fd #22 for interface updates Jul 6 23:59:14.968786 ntpd[2047]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 6 23:59:14.973976 ntpd[2047]: 6 Jul 23:59:14 ntpd[2047]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 6 23:59:14.973976 ntpd[2047]: 6 Jul 23:59:14 ntpd[2047]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 6 23:59:14.974046 coreos-metadata[2041]: Jul 06 23:59:14.973 INFO Fetch successful Jul 6 23:59:14.974046 coreos-metadata[2041]: Jul 06 23:59:14.973 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jul 6 23:59:14.968829 ntpd[2047]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 6 
23:59:14.977308 coreos-metadata[2041]: Jul 06 23:59:14.977 INFO Fetch successful Jul 6 23:59:14.977414 coreos-metadata[2041]: Jul 06 23:59:14.977 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jul 6 23:59:14.979570 extend-filesystems[2045]: Resized partition /dev/nvme0n1p9 Jul 6 23:59:14.980247 coreos-metadata[2041]: Jul 06 23:59:14.979 INFO Fetch successful Jul 6 23:59:14.980247 coreos-metadata[2041]: Jul 06 23:59:14.979 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jul 6 23:59:14.985187 coreos-metadata[2041]: Jul 06 23:59:14.985 INFO Fetch successful Jul 6 23:59:14.985187 coreos-metadata[2041]: Jul 06 23:59:14.985 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jul 6 23:59:14.989460 systemd[1]: Reached target network-online.target - Network is Online. Jul 6 23:59:14.992963 tar[2072]: linux-amd64/helm Jul 6 23:59:14.997102 extend-filesystems[2102]: resize2fs 1.47.1 (20-May-2024) Jul 6 23:59:15.001386 coreos-metadata[2041]: Jul 06 23:59:15.000 INFO Fetch failed with 404: resource not found Jul 6 23:59:15.001386 coreos-metadata[2041]: Jul 06 23:59:15.000 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jul 6 23:59:15.002592 coreos-metadata[2041]: Jul 06 23:59:15.002 INFO Fetch successful Jul 6 23:59:15.002592 coreos-metadata[2041]: Jul 06 23:59:15.002 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jul 6 23:59:15.002669 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:59:15.003457 coreos-metadata[2041]: Jul 06 23:59:15.003 INFO Fetch successful Jul 6 23:59:15.003457 coreos-metadata[2041]: Jul 06 23:59:15.003 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jul 6 23:59:15.004223 coreos-metadata[2041]: Jul 06 23:59:15.004 INFO Fetch successful Jul 6 23:59:15.004223 coreos-metadata[2041]: Jul 06 23:59:15.004 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jul 6 23:59:15.011051 coreos-metadata[2041]: Jul 06 23:59:15.010 INFO Fetch successful Jul 6 23:59:15.011051 coreos-metadata[2041]: Jul 06 23:59:15.010 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jul 6 23:59:15.015244 coreos-metadata[2041]: Jul 06 23:59:15.012 INFO Fetch successful Jul 6 23:59:15.014982 dbus-daemon[2042]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 6 23:59:15.017860 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 6 23:59:15.022760 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 6 23:59:15.022820 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 6 23:59:15.027727 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 6 23:59:15.027762 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 6 23:59:15.044892 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 6 23:59:15.096026 systemd[1]: Started update-engine.service - Update Engine. 
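The coreos-metadata fetches above follow the EC2 IMDSv2 flow: a PUT to mint a session token, then GETs with the token attached, and a 404 (as for ipv6 here) simply means the instance lacks that attribute. A minimal sketch of that flow, assuming only the standard IMDSv2 endpoint and header names; the TTL value and timeout are illustrative:

    package main

    import (
    	"context"
    	"fmt"
    	"io"
    	"log"
    	"net/http"
    	"time"
    )

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	// IMDSv2 step 1: PUT mints a session token, matching the
    	// "Putting http://169.254.169.254/latest/api/token" line above.
    	req, err := http.NewRequestWithContext(ctx, http.MethodPut,
    		"http://169.254.169.254/latest/api/token", nil)
    	if err != nil {
    		log.Fatal(err)
    	}
    	req.Header.Set("X-aws-ec2-metadata-token-ttl-seconds", "21600")
    	resp, err := http.DefaultClient.Do(req)
    	if err != nil {
    		log.Fatal(err)
    	}
    	token, err := io.ReadAll(resp.Body)
    	resp.Body.Close()
    	if err != nil {
    		log.Fatal(err)
    	}

    	// IMDSv2 step 2: GET a metadata path with the token attached, as
    	// the agent does for instance-id, instance-type, hostname, etc.
    	req, err = http.NewRequestWithContext(ctx, http.MethodGet,
    		"http://169.254.169.254/2021-01-03/meta-data/instance-id", nil)
    	if err != nil {
    		log.Fatal(err)
    	}
    	req.Header.Set("X-aws-ec2-metadata-token", string(token))
    	resp, err = http.DefaultClient.Do(req)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer resp.Body.Close()
    	id, err := io.ReadAll(resp.Body)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("instance-id: %s\n", id)
    }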
Jul 6 23:59:15.130512 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jul 6 23:59:15.132031 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 6 23:59:15.136259 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 6 23:59:15.146069 systemd[1]: Finished setup-oem.service - Setup OEM. Jul 6 23:59:15.168543 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 6 23:59:15.208111 systemd-logind[2061]: Watching system buttons on /dev/input/event1 (Power Button) Jul 6 23:59:15.230829 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 6 23:59:15.208142 systemd-logind[2061]: Watching system buttons on /dev/input/event3 (Sleep Button) Jul 6 23:59:15.208170 systemd-logind[2061]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 6 23:59:15.219986 systemd-logind[2061]: New seat seat0. Jul 6 23:59:15.224359 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jul 6 23:59:15.239222 extend-filesystems[2102]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 6 23:59:15.239222 extend-filesystems[2102]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 6 23:59:15.239222 extend-filesystems[2102]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jul 6 23:59:15.271523 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1649) Jul 6 23:59:15.272691 extend-filesystems[2045]: Resized filesystem in /dev/nvme0n1p9 Jul 6 23:59:15.245625 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 6 23:59:15.297714 bash[2148]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:59:15.285978 systemd[1]: Started systemd-logind.service - User Login Management. Jul 6 23:59:15.287067 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 6 23:59:15.287385 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 6 23:59:15.297000 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 6 23:59:15.320711 systemd[1]: Starting sshkeys.service... Jul 6 23:59:15.326119 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 6 23:59:15.380187 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 6 23:59:15.392046 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 6 23:59:15.550605 dbus-daemon[2042]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 6 23:59:15.550835 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 6 23:59:15.562359 dbus-daemon[2042]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2126 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 6 23:59:15.573362 systemd[1]: Starting polkit.service - Authorization Manager... 
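The resize2fs lines above count 4 KiB blocks, so the online resize grows the root partition from roughly 2.1 GiB to about 5.7 GiB. A quick check of that arithmetic, using the two block counts as logged:

    package main

    import "fmt"

    func main() {
    	const blockSize = 4096 // resize2fs reports "(4k) blocks" above
    	for _, blocks := range []int64{553472, 1489915} {
    		bytes := blocks * blockSize
    		fmt.Printf("%d blocks = %.2f GiB\n", blocks, float64(bytes)/(1<<30))
    	}
    }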
Jul 6 23:59:15.667884 polkitd[2213]: Started polkitd version 121 Jul 6 23:59:15.682663 coreos-metadata[2165]: Jul 06 23:59:15.682 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 6 23:59:15.683297 amazon-ssm-agent[2142]: Initializing new seelog logger Jul 6 23:59:15.686616 coreos-metadata[2165]: Jul 06 23:59:15.684 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jul 6 23:59:15.686713 amazon-ssm-agent[2142]: New Seelog Logger Creation Complete Jul 6 23:59:15.686713 amazon-ssm-agent[2142]: 2025/07/06 23:59:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:59:15.686713 amazon-ssm-agent[2142]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:59:15.687270 amazon-ssm-agent[2142]: 2025/07/06 23:59:15 processing appconfig overrides Jul 6 23:59:15.689639 coreos-metadata[2165]: Jul 06 23:59:15.689 INFO Fetch successful Jul 6 23:59:15.689726 coreos-metadata[2165]: Jul 06 23:59:15.689 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 6 23:59:15.693367 coreos-metadata[2165]: Jul 06 23:59:15.693 INFO Fetch successful Jul 6 23:59:15.695576 amazon-ssm-agent[2142]: 2025/07/06 23:59:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:59:15.697611 amazon-ssm-agent[2142]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:59:15.697759 amazon-ssm-agent[2142]: 2025/07/06 23:59:15 processing appconfig overrides Jul 6 23:59:15.698195 amazon-ssm-agent[2142]: 2025/07/06 23:59:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:59:15.698195 amazon-ssm-agent[2142]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:59:15.698294 amazon-ssm-agent[2142]: 2025/07/06 23:59:15 processing appconfig overrides Jul 6 23:59:15.702537 unknown[2165]: wrote ssh authorized keys file for user: core Jul 6 23:59:15.708021 amazon-ssm-agent[2142]: 2025-07-06 23:59:15 INFO Proxy environment variables: Jul 6 23:59:15.745128 polkitd[2213]: Loading rules from directory /etc/polkit-1/rules.d Jul 6 23:59:15.745239 polkitd[2213]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 6 23:59:15.748999 amazon-ssm-agent[2142]: 2025/07/06 23:59:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:59:15.748999 amazon-ssm-agent[2142]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:59:15.749185 amazon-ssm-agent[2142]: 2025/07/06 23:59:15 processing appconfig overrides Jul 6 23:59:15.762258 update-ssh-keys[2251]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:59:15.774162 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 6 23:59:15.780805 polkitd[2213]: Finished loading, compiling and executing 2 rules Jul 6 23:59:15.792052 dbus-daemon[2042]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 6 23:59:15.793300 systemd[1]: Finished sshkeys.service. Jul 6 23:59:15.795370 systemd[1]: Started polkit.service - Authorization Manager. Jul 6 23:59:15.795853 polkitd[2213]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 6 23:59:15.809630 amazon-ssm-agent[2142]: 2025-07-06 23:59:15 INFO https_proxy: Jul 6 23:59:15.909704 amazon-ssm-agent[2142]: 2025-07-06 23:59:15 INFO http_proxy: Jul 6 23:59:15.921719 systemd-hostnamed[2126]: Hostname set to (transient) Jul 6 23:59:15.921862 systemd-resolved[1982]: System hostname changed to 'ip-172-31-19-107'. 
Jul 6 23:59:16.000359 locksmithd[2129]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 6 23:59:16.011637 amazon-ssm-agent[2142]: 2025-07-06 23:59:15 INFO no_proxy: Jul 6 23:59:16.112261 amazon-ssm-agent[2142]: 2025-07-06 23:59:15 INFO Checking if agent identity type OnPrem can be assumed Jul 6 23:59:16.151571 containerd[2083]: time="2025-07-06T23:59:16.150178157Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 6 23:59:16.210179 amazon-ssm-agent[2142]: 2025-07-06 23:59:15 INFO Checking if agent identity type EC2 can be assumed Jul 6 23:59:16.288732 containerd[2083]: time="2025-07-06T23:59:16.288397581Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:59:16.294986 containerd[2083]: time="2025-07-06T23:59:16.294923359Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:59:16.295138 containerd[2083]: time="2025-07-06T23:59:16.295121478Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 6 23:59:16.298837 containerd[2083]: time="2025-07-06T23:59:16.297594843Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 6 23:59:16.298837 containerd[2083]: time="2025-07-06T23:59:16.297844991Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 6 23:59:16.298837 containerd[2083]: time="2025-07-06T23:59:16.297873972Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 6 23:59:16.298837 containerd[2083]: time="2025-07-06T23:59:16.297947663Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:59:16.298837 containerd[2083]: time="2025-07-06T23:59:16.297968981Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:59:16.298837 containerd[2083]: time="2025-07-06T23:59:16.298331366Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:59:16.298837 containerd[2083]: time="2025-07-06T23:59:16.298354620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 6 23:59:16.298837 containerd[2083]: time="2025-07-06T23:59:16.298375484Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:59:16.298837 containerd[2083]: time="2025-07-06T23:59:16.298393008Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 6 23:59:16.298837 containerd[2083]: time="2025-07-06T23:59:16.298487470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jul 6 23:59:16.298837 containerd[2083]: time="2025-07-06T23:59:16.298792358Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:59:16.300566 containerd[2083]: time="2025-07-06T23:59:16.299538402Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:59:16.300566 containerd[2083]: time="2025-07-06T23:59:16.299586839Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 6 23:59:16.300566 containerd[2083]: time="2025-07-06T23:59:16.299724417Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 6 23:59:16.300566 containerd[2083]: time="2025-07-06T23:59:16.299783050Z" level=info msg="metadata content store policy set" policy=shared Jul 6 23:59:16.308746 containerd[2083]: time="2025-07-06T23:59:16.307332442Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 6 23:59:16.308746 containerd[2083]: time="2025-07-06T23:59:16.307426060Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 6 23:59:16.308746 containerd[2083]: time="2025-07-06T23:59:16.307454117Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 6 23:59:16.308746 containerd[2083]: time="2025-07-06T23:59:16.307508262Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 6 23:59:16.308746 containerd[2083]: time="2025-07-06T23:59:16.307530937Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 6 23:59:16.308746 containerd[2083]: time="2025-07-06T23:59:16.307821567Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 6 23:59:16.312128 amazon-ssm-agent[2142]: 2025-07-06 23:59:16 INFO Agent will take identity from EC2 Jul 6 23:59:16.312241 containerd[2083]: time="2025-07-06T23:59:16.310827384Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 6 23:59:16.312241 containerd[2083]: time="2025-07-06T23:59:16.311050957Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 6 23:59:16.312241 containerd[2083]: time="2025-07-06T23:59:16.311080499Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 6 23:59:16.312241 containerd[2083]: time="2025-07-06T23:59:16.311101498Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 6 23:59:16.312241 containerd[2083]: time="2025-07-06T23:59:16.311124265Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 6 23:59:16.312241 containerd[2083]: time="2025-07-06T23:59:16.311145904Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 6 23:59:16.312241 containerd[2083]: time="2025-07-06T23:59:16.311166855Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Jul 6 23:59:16.312241 containerd[2083]: time="2025-07-06T23:59:16.311188651Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 6 23:59:16.312241 containerd[2083]: time="2025-07-06T23:59:16.311211484Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 6 23:59:16.312241 containerd[2083]: time="2025-07-06T23:59:16.311235610Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 6 23:59:16.312241 containerd[2083]: time="2025-07-06T23:59:16.311254877Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 6 23:59:16.312241 containerd[2083]: time="2025-07-06T23:59:16.311278141Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 6 23:59:16.312241 containerd[2083]: time="2025-07-06T23:59:16.311312445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 6 23:59:16.312241 containerd[2083]: time="2025-07-06T23:59:16.311332516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 6 23:59:16.312737 containerd[2083]: time="2025-07-06T23:59:16.311352439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 6 23:59:16.312737 containerd[2083]: time="2025-07-06T23:59:16.311372239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 6 23:59:16.312737 containerd[2083]: time="2025-07-06T23:59:16.311391832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 6 23:59:16.312737 containerd[2083]: time="2025-07-06T23:59:16.311411280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 6 23:59:16.312737 containerd[2083]: time="2025-07-06T23:59:16.311436255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 6 23:59:16.312737 containerd[2083]: time="2025-07-06T23:59:16.311457204Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 6 23:59:16.312737 containerd[2083]: time="2025-07-06T23:59:16.311477021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 6 23:59:16.312737 containerd[2083]: time="2025-07-06T23:59:16.311499429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 6 23:59:16.312737 containerd[2083]: time="2025-07-06T23:59:16.311519655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 6 23:59:16.312737 containerd[2083]: time="2025-07-06T23:59:16.311539623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 6 23:59:16.317173 containerd[2083]: time="2025-07-06T23:59:16.315785676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 6 23:59:16.317173 containerd[2083]: time="2025-07-06T23:59:16.315839544Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jul 6 23:59:16.317173 containerd[2083]: time="2025-07-06T23:59:16.315880782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 6 23:59:16.317173 containerd[2083]: time="2025-07-06T23:59:16.315912003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 6 23:59:16.317173 containerd[2083]: time="2025-07-06T23:59:16.315929696Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 6 23:59:16.317173 containerd[2083]: time="2025-07-06T23:59:16.316003346Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 6 23:59:16.317173 containerd[2083]: time="2025-07-06T23:59:16.316031042Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 6 23:59:16.317173 containerd[2083]: time="2025-07-06T23:59:16.316049683Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 6 23:59:16.317173 containerd[2083]: time="2025-07-06T23:59:16.316067613Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 6 23:59:16.317173 containerd[2083]: time="2025-07-06T23:59:16.316084996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 6 23:59:16.317173 containerd[2083]: time="2025-07-06T23:59:16.316117208Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 6 23:59:16.317173 containerd[2083]: time="2025-07-06T23:59:16.316132814Z" level=info msg="NRI interface is disabled by configuration." Jul 6 23:59:16.317173 containerd[2083]: time="2025-07-06T23:59:16.316148250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 6 23:59:16.317747 containerd[2083]: time="2025-07-06T23:59:16.316568829Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 6 23:59:16.317747 containerd[2083]: time="2025-07-06T23:59:16.316675341Z" level=info msg="Connect containerd service" Jul 6 23:59:16.317747 containerd[2083]: time="2025-07-06T23:59:16.316731159Z" level=info msg="using legacy CRI server" Jul 6 23:59:16.317747 containerd[2083]: time="2025-07-06T23:59:16.316740541Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 6 23:59:16.317747 containerd[2083]: time="2025-07-06T23:59:16.316890504Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 6 23:59:16.325850 containerd[2083]: time="2025-07-06T23:59:16.325343058Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:59:16.326846 
containerd[2083]: time="2025-07-06T23:59:16.326655482Z" level=info msg="Start subscribing containerd event" Jul 6 23:59:16.326846 containerd[2083]: time="2025-07-06T23:59:16.326741335Z" level=info msg="Start recovering state" Jul 6 23:59:16.328784 containerd[2083]: time="2025-07-06T23:59:16.328431555Z" level=info msg="Start event monitor" Jul 6 23:59:16.328784 containerd[2083]: time="2025-07-06T23:59:16.328481583Z" level=info msg="Start snapshots syncer" Jul 6 23:59:16.328784 containerd[2083]: time="2025-07-06T23:59:16.328496621Z" level=info msg="Start cni network conf syncer for default" Jul 6 23:59:16.328784 containerd[2083]: time="2025-07-06T23:59:16.328514502Z" level=info msg="Start streaming server" Jul 6 23:59:16.332595 containerd[2083]: time="2025-07-06T23:59:16.330823429Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 6 23:59:16.332595 containerd[2083]: time="2025-07-06T23:59:16.330998768Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 6 23:59:16.337855 systemd[1]: Started containerd.service - containerd container runtime. Jul 6 23:59:16.338931 containerd[2083]: time="2025-07-06T23:59:16.337911243Z" level=info msg="containerd successfully booted in 0.192724s" Jul 6 23:59:16.360664 sshd_keygen[2082]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 6 23:59:16.394920 amazon-ssm-agent[2142]: 2025-07-06 23:59:16 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 6 23:59:16.394920 amazon-ssm-agent[2142]: 2025-07-06 23:59:16 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 6 23:59:16.394920 amazon-ssm-agent[2142]: 2025-07-06 23:59:16 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 6 23:59:16.394920 amazon-ssm-agent[2142]: 2025-07-06 23:59:16 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jul 6 23:59:16.394920 amazon-ssm-agent[2142]: 2025-07-06 23:59:16 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jul 6 23:59:16.394920 amazon-ssm-agent[2142]: 2025-07-06 23:59:16 INFO [amazon-ssm-agent] Starting Core Agent Jul 6 23:59:16.394920 amazon-ssm-agent[2142]: 2025-07-06 23:59:16 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jul 6 23:59:16.394920 amazon-ssm-agent[2142]: 2025-07-06 23:59:16 INFO [Registrar] Starting registrar module Jul 6 23:59:16.394920 amazon-ssm-agent[2142]: 2025-07-06 23:59:16 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jul 6 23:59:16.394920 amazon-ssm-agent[2142]: 2025-07-06 23:59:16 INFO [EC2Identity] EC2 registration was successful. Jul 6 23:59:16.394920 amazon-ssm-agent[2142]: 2025-07-06 23:59:16 INFO [CredentialRefresher] credentialRefresher has started Jul 6 23:59:16.394920 amazon-ssm-agent[2142]: 2025-07-06 23:59:16 INFO [CredentialRefresher] Starting credentials refresher loop Jul 6 23:59:16.394920 amazon-ssm-agent[2142]: 2025-07-06 23:59:16 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jul 6 23:59:16.408242 amazon-ssm-agent[2142]: 2025-07-06 23:59:16 INFO [CredentialRefresher] Next credential rotation will be in 30.291656369066665 minutes Jul 6 23:59:16.412134 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 6 23:59:16.429573 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 6 23:59:16.441414 systemd[1]: issuegen.service: Deactivated successfully. Jul 6 23:59:16.441782 systemd[1]: Finished issuegen.service - Generate /run/issue. 
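With containerd reporting "serving..." on /run/containerd/containerd.sock and "successfully booted", the socket can be exercised with the official Go client. A minimal sketch under the assumption that the caller has permission to read the socket; the "default" namespace is an illustrative choice:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	// Dial the same socket the daemon logs as "serving..." above.
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	ctx := namespaces.WithNamespace(context.Background(), "default")
    	ver, err := client.Version(ctx)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("containerd version:", ver.Version) // v1.7.21 per the log
    }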
Jul 6 23:59:16.459562 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 6 23:59:16.487317 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 6 23:59:16.496628 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 6 23:59:16.511464 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 6 23:59:16.514188 systemd[1]: Reached target getty.target - Login Prompts. Jul 6 23:59:16.693346 tar[2072]: linux-amd64/LICENSE Jul 6 23:59:16.693346 tar[2072]: linux-amd64/README.md Jul 6 23:59:16.710669 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 6 23:59:17.408409 amazon-ssm-agent[2142]: 2025-07-06 23:59:17 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jul 6 23:59:17.464794 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:59:17.467822 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 6 23:59:17.470955 systemd[1]: Startup finished in 8.625s (kernel) + 7.231s (userspace) = 15.856s. Jul 6 23:59:17.481199 (kubelet)[2329]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:59:17.510186 amazon-ssm-agent[2142]: 2025-07-06 23:59:17 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2317) started Jul 6 23:59:17.611151 amazon-ssm-agent[2142]: 2025-07-06 23:59:17 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jul 6 23:59:17.703332 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 6 23:59:17.710272 systemd[1]: Started sshd@0-172.31.19.107:22-147.75.109.163:54878.service - OpenSSH per-connection server daemon (147.75.109.163:54878). Jul 6 23:59:17.910506 sshd[2346]: Accepted publickey for core from 147.75.109.163 port 54878 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 6 23:59:17.914125 sshd[2346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:17.926618 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 6 23:59:17.934015 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 6 23:59:17.942782 systemd-logind[2061]: New session 1 of user core. Jul 6 23:59:17.958444 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 6 23:59:17.970033 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 6 23:59:17.978707 (systemd)[2352]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 6 23:59:18.141843 systemd[2352]: Queued start job for default target default.target. Jul 6 23:59:18.142757 systemd[2352]: Created slice app.slice - User Application Slice. Jul 6 23:59:18.142790 systemd[2352]: Reached target paths.target - Paths. Jul 6 23:59:18.142811 systemd[2352]: Reached target timers.target - Timers. Jul 6 23:59:18.151758 systemd[2352]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 6 23:59:18.162766 systemd[2352]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 6 23:59:18.162862 systemd[2352]: Reached target sockets.target - Sockets. Jul 6 23:59:18.162882 systemd[2352]: Reached target basic.target - Basic System. Jul 6 23:59:18.162944 systemd[2352]: Reached target default.target - Main User Target. 
Jul 6 23:59:18.162984 systemd[2352]: Startup finished in 173ms. Jul 6 23:59:18.163514 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 6 23:59:18.171534 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 6 23:59:18.322320 systemd[1]: Started sshd@1-172.31.19.107:22-147.75.109.163:54890.service - OpenSSH per-connection server daemon (147.75.109.163:54890). Jul 6 23:59:18.448067 kubelet[2329]: E0706 23:59:18.448019 2329 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:59:18.451517 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:59:18.451852 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:59:18.494626 sshd[2364]: Accepted publickey for core from 147.75.109.163 port 54890 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 6 23:59:18.496130 sshd[2364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:18.501857 systemd-logind[2061]: New session 2 of user core. Jul 6 23:59:18.507071 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 6 23:59:18.629482 sshd[2364]: pam_unix(sshd:session): session closed for user core Jul 6 23:59:18.633868 systemd[1]: sshd@1-172.31.19.107:22-147.75.109.163:54890.service: Deactivated successfully. Jul 6 23:59:18.641156 systemd-logind[2061]: Session 2 logged out. Waiting for processes to exit. Jul 6 23:59:18.642240 systemd[1]: session-2.scope: Deactivated successfully. Jul 6 23:59:18.643275 systemd-logind[2061]: Removed session 2. Jul 6 23:59:18.659469 systemd[1]: Started sshd@2-172.31.19.107:22-147.75.109.163:54892.service - OpenSSH per-connection server daemon (147.75.109.163:54892). Jul 6 23:59:18.831282 sshd[2375]: Accepted publickey for core from 147.75.109.163 port 54892 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 6 23:59:18.833015 sshd[2375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:18.838309 systemd-logind[2061]: New session 3 of user core. Jul 6 23:59:18.842037 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 6 23:59:18.961472 sshd[2375]: pam_unix(sshd:session): session closed for user core Jul 6 23:59:18.965038 systemd[1]: sshd@2-172.31.19.107:22-147.75.109.163:54892.service: Deactivated successfully. Jul 6 23:59:18.969057 systemd-logind[2061]: Session 3 logged out. Waiting for processes to exit. Jul 6 23:59:18.969843 systemd[1]: session-3.scope: Deactivated successfully. Jul 6 23:59:18.970961 systemd-logind[2061]: Removed session 3. Jul 6 23:59:18.989951 systemd[1]: Started sshd@3-172.31.19.107:22-147.75.109.163:54894.service - OpenSSH per-connection server daemon (147.75.109.163:54894). Jul 6 23:59:19.151745 sshd[2383]: Accepted publickey for core from 147.75.109.163 port 54894 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 6 23:59:19.153892 sshd[2383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:19.160067 systemd-logind[2061]: New session 4 of user core. Jul 6 23:59:19.169997 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jul 6 23:59:19.294597 sshd[2383]: pam_unix(sshd:session): session closed for user core Jul 6 23:59:19.298100 systemd[1]: sshd@3-172.31.19.107:22-147.75.109.163:54894.service: Deactivated successfully. Jul 6 23:59:19.304095 systemd[1]: session-4.scope: Deactivated successfully. Jul 6 23:59:19.305244 systemd-logind[2061]: Session 4 logged out. Waiting for processes to exit. Jul 6 23:59:19.306417 systemd-logind[2061]: Removed session 4. Jul 6 23:59:19.326953 systemd[1]: Started sshd@4-172.31.19.107:22-147.75.109.163:54902.service - OpenSSH per-connection server daemon (147.75.109.163:54902). Jul 6 23:59:19.489504 sshd[2391]: Accepted publickey for core from 147.75.109.163 port 54902 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 6 23:59:19.491016 sshd[2391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:19.495922 systemd-logind[2061]: New session 5 of user core. Jul 6 23:59:19.502974 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 6 23:59:19.628062 sudo[2395]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 6 23:59:19.628480 sudo[2395]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:59:19.640751 sudo[2395]: pam_unix(sudo:session): session closed for user root Jul 6 23:59:19.664900 sshd[2391]: pam_unix(sshd:session): session closed for user core Jul 6 23:59:19.669034 systemd[1]: sshd@4-172.31.19.107:22-147.75.109.163:54902.service: Deactivated successfully. Jul 6 23:59:19.672249 systemd-logind[2061]: Session 5 logged out. Waiting for processes to exit. Jul 6 23:59:19.672494 systemd[1]: session-5.scope: Deactivated successfully. Jul 6 23:59:19.674288 systemd-logind[2061]: Removed session 5. Jul 6 23:59:19.699149 systemd[1]: Started sshd@5-172.31.19.107:22-147.75.109.163:54910.service - OpenSSH per-connection server daemon (147.75.109.163:54910). Jul 6 23:59:19.854746 sshd[2400]: Accepted publickey for core from 147.75.109.163 port 54910 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 6 23:59:19.856328 sshd[2400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:19.861192 systemd-logind[2061]: New session 6 of user core. Jul 6 23:59:19.867947 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 6 23:59:19.967997 sudo[2405]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 6 23:59:19.968303 sudo[2405]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:59:19.973693 sudo[2405]: pam_unix(sudo:session): session closed for user root Jul 6 23:59:19.979687 sudo[2404]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 6 23:59:19.979989 sudo[2404]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:59:20.001158 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 6 23:59:20.005371 auditctl[2408]: No rules Jul 6 23:59:20.003969 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:59:20.004235 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 6 23:59:20.010296 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 6 23:59:20.040390 augenrules[2427]: No rules Jul 6 23:59:20.042542 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jul 6 23:59:20.046814 sudo[2404]: pam_unix(sudo:session): session closed for user root Jul 6 23:59:20.070136 sshd[2400]: pam_unix(sshd:session): session closed for user core Jul 6 23:59:20.075163 systemd[1]: sshd@5-172.31.19.107:22-147.75.109.163:54910.service: Deactivated successfully. Jul 6 23:59:20.081083 systemd[1]: session-6.scope: Deactivated successfully. Jul 6 23:59:20.081995 systemd-logind[2061]: Session 6 logged out. Waiting for processes to exit. Jul 6 23:59:20.083257 systemd-logind[2061]: Removed session 6. Jul 6 23:59:20.099976 systemd[1]: Started sshd@6-172.31.19.107:22-147.75.109.163:54924.service - OpenSSH per-connection server daemon (147.75.109.163:54924). Jul 6 23:59:20.258541 sshd[2436]: Accepted publickey for core from 147.75.109.163 port 54924 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 6 23:59:20.259497 sshd[2436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:20.264121 systemd-logind[2061]: New session 7 of user core. Jul 6 23:59:20.271942 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 6 23:59:20.370215 sudo[2440]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 6 23:59:20.370516 sudo[2440]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:59:20.891876 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 6 23:59:20.893815 (dockerd)[2457]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 6 23:59:21.496960 dockerd[2457]: time="2025-07-06T23:59:21.496887266Z" level=info msg="Starting up" Jul 6 23:59:21.669331 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2523353814-merged.mount: Deactivated successfully. Jul 6 23:59:22.879448 systemd-resolved[1982]: Clock change detected. Flushing caches. Jul 6 23:59:22.912898 dockerd[2457]: time="2025-07-06T23:59:22.912569587Z" level=info msg="Loading containers: start." Jul 6 23:59:23.097886 kernel: Initializing XFRM netlink socket Jul 6 23:59:23.152371 (udev-worker)[2479]: Network interface NamePolicy= disabled on kernel command line. Jul 6 23:59:23.215780 systemd-networkd[1648]: docker0: Link UP Jul 6 23:59:23.239771 dockerd[2457]: time="2025-07-06T23:59:23.239711144Z" level=info msg="Loading containers: done." Jul 6 23:59:23.274496 dockerd[2457]: time="2025-07-06T23:59:23.274426713Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 6 23:59:23.274730 dockerd[2457]: time="2025-07-06T23:59:23.274572870Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 6 23:59:23.274730 dockerd[2457]: time="2025-07-06T23:59:23.274715546Z" level=info msg="Daemon has completed initialization" Jul 6 23:59:23.313037 dockerd[2457]: time="2025-07-06T23:59:23.312918720Z" level=info msg="API listen on /run/docker.sock" Jul 6 23:59:23.313619 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 6 23:59:24.366126 containerd[2083]: time="2025-07-06T23:59:24.366080922Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 6 23:59:24.931421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4085298774.mount: Deactivated successfully. 
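Once dockerd logs "API listen on /run/docker.sock", the daemon answers the standard Engine API on that socket. A small sketch with the official Go SDK; FromEnv falls back to the default unix socket when DOCKER_HOST is unset:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/docker/docker/client"
    )

    func main() {
    	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer cli.Close()

    	// Ping hits the /_ping endpoint the daemon exposes once initialized.
    	ping, err := cli.Ping(context.Background())
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("engine API version:", ping.APIVersion)
    }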
Jul 6 23:59:26.212540 containerd[2083]: time="2025-07-06T23:59:26.212476512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:26.213603 containerd[2083]: time="2025-07-06T23:59:26.213538854Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744" Jul 6 23:59:26.215180 containerd[2083]: time="2025-07-06T23:59:26.214738532Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:26.218018 containerd[2083]: time="2025-07-06T23:59:26.217974710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:26.219348 containerd[2083]: time="2025-07-06T23:59:26.219301470Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 1.853173435s" Jul 6 23:59:26.219455 containerd[2083]: time="2025-07-06T23:59:26.219356452Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 6 23:59:26.220164 containerd[2083]: time="2025-07-06T23:59:26.220120163Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 6 23:59:27.827609 containerd[2083]: time="2025-07-06T23:59:27.827530238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:27.828641 containerd[2083]: time="2025-07-06T23:59:27.828597307Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294" Jul 6 23:59:27.829927 containerd[2083]: time="2025-07-06T23:59:27.829881647Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:27.832925 containerd[2083]: time="2025-07-06T23:59:27.832859150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:27.834271 containerd[2083]: time="2025-07-06T23:59:27.834071510Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.613908981s" Jul 6 23:59:27.834271 containerd[2083]: time="2025-07-06T23:59:27.834113622Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 6 23:59:27.835029 
containerd[2083]: time="2025-07-06T23:59:27.834743976Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 6 23:59:29.082489 containerd[2083]: time="2025-07-06T23:59:29.082406288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:29.083927 containerd[2083]: time="2025-07-06T23:59:29.083849243Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671" Jul 6 23:59:29.084687 containerd[2083]: time="2025-07-06T23:59:29.084546816Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:29.088804 containerd[2083]: time="2025-07-06T23:59:29.088396930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:29.089676 containerd[2083]: time="2025-07-06T23:59:29.089607948Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.25482901s" Jul 6 23:59:29.090079 containerd[2083]: time="2025-07-06T23:59:29.089891179Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 6 23:59:29.091324 containerd[2083]: time="2025-07-06T23:59:29.090814965Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 6 23:59:29.640596 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 6 23:59:29.648248 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:59:30.087988 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:59:30.104753 (kubelet)[2672]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:59:30.171408 kubelet[2672]: E0706 23:59:30.171356 2672 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:59:30.180006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:59:30.180284 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:59:30.320939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3361948973.mount: Deactivated successfully. 
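Both kubelet failures in this log are the same pre-bootstrap condition: the unit passes --config /var/lib/kubelet/config.yaml, that file does not exist until kubeadm init or kubeadm join writes it, so the kubelet exits 1 and systemd keeps rescheduling it ("restart counter is at 1" above). A sketch of the precondition behind the run.go:72 error; the path is as logged, the message wording is paraphrased:

    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    )

    func main() {
    	const cfg = "/var/lib/kubelet/config.yaml" // path from the error above
    	if _, err := os.Stat(cfg); errors.Is(err, os.ErrNotExist) {
    		// The kubelet exits non-zero here; systemd then applies its
    		// restart policy, producing the scheduled-restart lines above.
    		fmt.Printf("kubelet cannot start: %s missing until kubeadm runs\n", cfg)
    		os.Exit(1)
    	}
    	fmt.Println("config present; kubelet would proceed to load it")
    }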
Jul 6 23:59:30.906185 containerd[2083]: time="2025-07-06T23:59:30.906124367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:30.907072 containerd[2083]: time="2025-07-06T23:59:30.907040897Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943" Jul 6 23:59:30.908681 containerd[2083]: time="2025-07-06T23:59:30.908150575Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:30.910196 containerd[2083]: time="2025-07-06T23:59:30.910162904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:30.911459 containerd[2083]: time="2025-07-06T23:59:30.911092417Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 1.820246573s" Jul 6 23:59:30.911459 containerd[2083]: time="2025-07-06T23:59:30.911127438Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 6 23:59:30.911762 containerd[2083]: time="2025-07-06T23:59:30.911737824Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 6 23:59:31.398172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4213709082.mount: Deactivated successfully. 
Jul 6 23:59:32.354312 containerd[2083]: time="2025-07-06T23:59:32.354251245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:32.355473 containerd[2083]: time="2025-07-06T23:59:32.355421014Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 6 23:59:32.356413 containerd[2083]: time="2025-07-06T23:59:32.356358001Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:32.359749 containerd[2083]: time="2025-07-06T23:59:32.359687871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:32.361104 containerd[2083]: time="2025-07-06T23:59:32.360937353Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.449165935s" Jul 6 23:59:32.361104 containerd[2083]: time="2025-07-06T23:59:32.360985745Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 6 23:59:32.362196 containerd[2083]: time="2025-07-06T23:59:32.362165030Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 6 23:59:32.845555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1198700019.mount: Deactivated successfully. 
Jul 6 23:59:32.875361 containerd[2083]: time="2025-07-06T23:59:32.875298308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:32.878534 containerd[2083]: time="2025-07-06T23:59:32.878470530Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 6 23:59:32.883234 containerd[2083]: time="2025-07-06T23:59:32.882399240Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:32.887725 containerd[2083]: time="2025-07-06T23:59:32.887652752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:32.889017 containerd[2083]: time="2025-07-06T23:59:32.888346833Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 526.148191ms" Jul 6 23:59:32.889017 containerd[2083]: time="2025-07-06T23:59:32.888383023Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 6 23:59:32.890150 containerd[2083]: time="2025-07-06T23:59:32.890115197Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 6 23:59:33.433035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2191323199.mount: Deactivated successfully. Jul 6 23:59:35.447581 containerd[2083]: time="2025-07-06T23:59:35.447516747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:35.450293 containerd[2083]: time="2025-07-06T23:59:35.450228528Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jul 6 23:59:35.456829 containerd[2083]: time="2025-07-06T23:59:35.456744402Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:35.467024 containerd[2083]: time="2025-07-06T23:59:35.466761079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:35.470754 containerd[2083]: time="2025-07-06T23:59:35.467791955Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.577638603s" Jul 6 23:59:35.470754 containerd[2083]: time="2025-07-06T23:59:35.467839098Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 6 23:59:37.917719 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
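[Editor's note] The entries above show containerd resolving each image's repo tag to a pinned repo digest and unpacking it before returning the reference to the kubelet. Below is a minimal sketch of that pull path using containerd's Go client, assuming the default socket path and the "k8s.io" namespace where CRI-managed images live; the pause:3.10 reference is taken from the log, everything else is illustrative.

```go
package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Assumed socket path; containerd's default on this platform.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images are kept in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull by tag and unpack; the resolved target digest is the
	// "repo digest" reported in the "Pulled image" entries above.
	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.10", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(img.Name(), img.Target().Digest)
}
```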
Jul 6 23:59:37.931117 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:59:37.970384 systemd[1]: Reloading requested from client PID 2822 ('systemctl') (unit session-7.scope)... Jul 6 23:59:37.970556 systemd[1]: Reloading... Jul 6 23:59:38.102703 zram_generator::config[2863]: No configuration found. Jul 6 23:59:38.277530 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:59:38.365563 systemd[1]: Reloading finished in 394 ms. Jul 6 23:59:38.411761 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 6 23:59:38.411878 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 6 23:59:38.412274 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:59:38.419908 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:59:38.665914 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:59:38.677265 (kubelet)[2935]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:59:38.730168 kubelet[2935]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:59:38.730168 kubelet[2935]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 6 23:59:38.730168 kubelet[2935]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
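[Editor's note] The deprecation warnings above say --container-runtime-endpoint and --volume-plugin-dir should move into the file passed via --config. A sketch of that migration using the upstream KubeletConfiguration types follows; the endpoint value is an assumption, while the plugin directory matches the Flexvolume path the kubelet probes further down in this log.

```go
package main

import (
	"os"

	"k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	cfg := v1beta1.KubeletConfiguration{
		// Replaces --container-runtime-endpoint (assumed value).
		ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock",
		// Replaces --volume-plugin-dir (path from the probe entry below).
		VolumePluginDir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
	}
	cfg.APIVersion = "kubelet.config.k8s.io/v1beta1"
	cfg.Kind = "KubeletConfiguration"

	// Emit the YAML the kubelet would read via --config.
	out, err := yaml.Marshal(&cfg)
	if err != nil {
		panic(err)
	}
	os.Stdout.Write(out)
}
```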
Jul 6 23:59:38.730808 kubelet[2935]: I0706 23:59:38.730255 2935 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:59:39.000254 kubelet[2935]: I0706 23:59:38.999893 2935 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 6 23:59:39.000254 kubelet[2935]: I0706 23:59:38.999926 2935 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:59:39.000425 kubelet[2935]: I0706 23:59:39.000278 2935 server.go:934] "Client rotation is on, will bootstrap in background" Jul 6 23:59:39.056988 kubelet[2935]: E0706 23:59:39.056944 2935 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.19.107:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.19.107:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:59:39.058261 kubelet[2935]: I0706 23:59:39.058031 2935 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:59:39.084200 kubelet[2935]: E0706 23:59:39.084143 2935 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:59:39.084200 kubelet[2935]: I0706 23:59:39.084194 2935 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:59:39.091868 kubelet[2935]: I0706 23:59:39.091838 2935 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:59:39.093941 kubelet[2935]: I0706 23:59:39.093887 2935 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 6 23:59:39.094086 kubelet[2935]: I0706 23:59:39.094053 2935 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:59:39.094277 kubelet[2935]: I0706 23:59:39.094085 2935 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-107","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 6 23:59:39.094375 kubelet[2935]: I0706 23:59:39.094279 2935 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:59:39.094375 kubelet[2935]: I0706 23:59:39.094289 2935 container_manager_linux.go:300] "Creating device plugin manager" Jul 6 23:59:39.094428 kubelet[2935]: I0706 23:59:39.094396 2935 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:59:39.099254 kubelet[2935]: I0706 23:59:39.099211 2935 kubelet.go:408] "Attempting to sync node with API server" Jul 6 23:59:39.099254 kubelet[2935]: I0706 23:59:39.099258 2935 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:59:39.099393 kubelet[2935]: I0706 23:59:39.099304 2935 kubelet.go:314] "Adding apiserver pod source" Jul 6 23:59:39.099393 kubelet[2935]: I0706 23:59:39.099324 2935 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:59:39.106590 kubelet[2935]: W0706 23:59:39.106259 2935 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.19.107:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-107&limit=500&resourceVersion=0": dial tcp 172.31.19.107:6443: connect: connection refused Jul 6 23:59:39.106590 kubelet[2935]: E0706 23:59:39.106333 2935 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.31.19.107:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-107&limit=500&resourceVersion=0\": dial tcp 172.31.19.107:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:59:39.107972 kubelet[2935]: W0706 23:59:39.107867 2935 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.19.107:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.19.107:6443: connect: connection refused Jul 6 23:59:39.107972 kubelet[2935]: E0706 23:59:39.107925 2935 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.19.107:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.19.107:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:59:39.108263 kubelet[2935]: I0706 23:59:39.108165 2935 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 6 23:59:39.113143 kubelet[2935]: I0706 23:59:39.113115 2935 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:59:39.113243 kubelet[2935]: W0706 23:59:39.113191 2935 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 6 23:59:39.114073 kubelet[2935]: I0706 23:59:39.113927 2935 server.go:1274] "Started kubelet" Jul 6 23:59:39.114230 kubelet[2935]: I0706 23:59:39.114184 2935 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:59:39.114695 kubelet[2935]: I0706 23:59:39.114054 2935 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:59:39.114695 kubelet[2935]: I0706 23:59:39.114464 2935 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:59:39.116848 kubelet[2935]: I0706 23:59:39.116716 2935 server.go:449] "Adding debug handlers to kubelet server" Jul 6 23:59:39.122525 kubelet[2935]: I0706 23:59:39.122358 2935 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:59:39.126983 kubelet[2935]: E0706 23:59:39.120458 2935 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.19.107:6443/api/v1/namespaces/default/events\": dial tcp 172.31.19.107:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-19-107.184fcf00082377d2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-107,UID:ip-172-31-19-107,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-107,},FirstTimestamp:2025-07-06 23:59:39.113904082 +0000 UTC m=+0.431965940,LastTimestamp:2025-07-06 23:59:39.113904082 +0000 UTC m=+0.431965940,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-107,}" Jul 6 23:59:39.127602 kubelet[2935]: I0706 23:59:39.127217 2935 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 6 23:59:39.127602 kubelet[2935]: E0706 23:59:39.127455 2935 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-19-107\" not found" Jul 6 23:59:39.131710 kubelet[2935]: I0706 
23:59:39.131671 2935 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:59:39.135013 kubelet[2935]: I0706 23:59:39.134991 2935 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 6 23:59:39.135309 kubelet[2935]: I0706 23:59:39.135180 2935 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:59:39.136459 kubelet[2935]: E0706 23:59:39.136290 2935 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-107?timeout=10s\": dial tcp 172.31.19.107:6443: connect: connection refused" interval="200ms" Jul 6 23:59:39.139775 kubelet[2935]: W0706 23:59:39.139048 2935 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.19.107:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.107:6443: connect: connection refused Jul 6 23:59:39.139775 kubelet[2935]: E0706 23:59:39.139099 2935 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.19.107:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.19.107:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:59:39.140252 kubelet[2935]: E0706 23:59:39.140188 2935 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:59:39.140252 kubelet[2935]: I0706 23:59:39.140621 2935 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:59:39.140252 kubelet[2935]: I0706 23:59:39.140630 2935 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:59:39.140252 kubelet[2935]: I0706 23:59:39.140711 2935 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:59:39.155088 kubelet[2935]: I0706 23:59:39.154888 2935 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:59:39.157416 kubelet[2935]: I0706 23:59:39.156951 2935 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 6 23:59:39.157416 kubelet[2935]: I0706 23:59:39.156986 2935 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 6 23:59:39.157416 kubelet[2935]: I0706 23:59:39.157018 2935 kubelet.go:2321] "Starting kubelet main sync loop" Jul 6 23:59:39.157416 kubelet[2935]: E0706 23:59:39.157074 2935 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:59:39.174637 kubelet[2935]: W0706 23:59:39.174558 2935 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.19.107:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.107:6443: connect: connection refused Jul 6 23:59:39.175206 kubelet[2935]: E0706 23:59:39.175170 2935 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.19.107:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.19.107:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:59:39.186509 kubelet[2935]: I0706 23:59:39.186249 2935 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 6 23:59:39.186509 kubelet[2935]: I0706 23:59:39.186268 2935 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 6 23:59:39.186509 kubelet[2935]: I0706 23:59:39.186293 2935 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:59:39.191377 kubelet[2935]: I0706 23:59:39.191307 2935 policy_none.go:49] "None policy: Start" Jul 6 23:59:39.193005 kubelet[2935]: I0706 23:59:39.192941 2935 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 6 23:59:39.193005 kubelet[2935]: I0706 23:59:39.192980 2935 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:59:39.201109 kubelet[2935]: I0706 23:59:39.200445 2935 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:59:39.201109 kubelet[2935]: I0706 23:59:39.200735 2935 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:59:39.201109 kubelet[2935]: I0706 23:59:39.200756 2935 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:59:39.202932 kubelet[2935]: I0706 23:59:39.202898 2935 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:59:39.206270 kubelet[2935]: E0706 23:59:39.206215 2935 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-19-107\" not found" Jul 6 23:59:39.303244 kubelet[2935]: I0706 23:59:39.303094 2935 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-107" Jul 6 23:59:39.303526 kubelet[2935]: E0706 23:59:39.303458 2935 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.19.107:6443/api/v1/nodes\": dial tcp 172.31.19.107:6443: connect: connection refused" node="ip-172-31-19-107" Jul 6 23:59:39.337915 kubelet[2935]: E0706 23:59:39.337853 2935 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-107?timeout=10s\": dial tcp 172.31.19.107:6443: connect: connection refused" interval="400ms" Jul 6 23:59:39.436195 kubelet[2935]: I0706 23:59:39.436136 2935 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa9fa5e533519e888690081b0ffe220b-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-107\" (UID: \"fa9fa5e533519e888690081b0ffe220b\") " pod="kube-system/kube-apiserver-ip-172-31-19-107" Jul 6 23:59:39.436195 kubelet[2935]: I0706 23:59:39.436189 2935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa9fa5e533519e888690081b0ffe220b-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-107\" (UID: \"fa9fa5e533519e888690081b0ffe220b\") " pod="kube-system/kube-apiserver-ip-172-31-19-107" Jul 6 23:59:39.436377 kubelet[2935]: I0706 23:59:39.436216 2935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b7b3933f49ea6a9d016e662a91d5dcaf-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-107\" (UID: \"b7b3933f49ea6a9d016e662a91d5dcaf\") " pod="kube-system/kube-controller-manager-ip-172-31-19-107" Jul 6 23:59:39.436377 kubelet[2935]: I0706 23:59:39.436246 2935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b7b3933f49ea6a9d016e662a91d5dcaf-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-107\" (UID: \"b7b3933f49ea6a9d016e662a91d5dcaf\") " pod="kube-system/kube-controller-manager-ip-172-31-19-107" Jul 6 23:59:39.436377 kubelet[2935]: I0706 23:59:39.436266 2935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b7b3933f49ea6a9d016e662a91d5dcaf-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-107\" (UID: \"b7b3933f49ea6a9d016e662a91d5dcaf\") " pod="kube-system/kube-controller-manager-ip-172-31-19-107" Jul 6 23:59:39.436377 kubelet[2935]: I0706 23:59:39.436283 2935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b7b3933f49ea6a9d016e662a91d5dcaf-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-107\" (UID: \"b7b3933f49ea6a9d016e662a91d5dcaf\") " pod="kube-system/kube-controller-manager-ip-172-31-19-107" Jul 6 23:59:39.436377 kubelet[2935]: I0706 23:59:39.436303 2935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/96a64197894b097e5e5c66ff77ec20c7-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-107\" (UID: \"96a64197894b097e5e5c66ff77ec20c7\") " pod="kube-system/kube-scheduler-ip-172-31-19-107" Jul 6 23:59:39.436520 kubelet[2935]: I0706 23:59:39.436319 2935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa9fa5e533519e888690081b0ffe220b-ca-certs\") pod \"kube-apiserver-ip-172-31-19-107\" (UID: \"fa9fa5e533519e888690081b0ffe220b\") " pod="kube-system/kube-apiserver-ip-172-31-19-107" Jul 6 23:59:39.436520 kubelet[2935]: I0706 23:59:39.436333 2935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b7b3933f49ea6a9d016e662a91d5dcaf-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-107\" (UID: 
\"b7b3933f49ea6a9d016e662a91d5dcaf\") " pod="kube-system/kube-controller-manager-ip-172-31-19-107" Jul 6 23:59:39.506020 kubelet[2935]: I0706 23:59:39.505987 2935 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-107" Jul 6 23:59:39.506408 kubelet[2935]: E0706 23:59:39.506352 2935 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.19.107:6443/api/v1/nodes\": dial tcp 172.31.19.107:6443: connect: connection refused" node="ip-172-31-19-107" Jul 6 23:59:39.567506 containerd[2083]: time="2025-07-06T23:59:39.567296037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-107,Uid:fa9fa5e533519e888690081b0ffe220b,Namespace:kube-system,Attempt:0,}" Jul 6 23:59:39.571233 containerd[2083]: time="2025-07-06T23:59:39.571182362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-107,Uid:b7b3933f49ea6a9d016e662a91d5dcaf,Namespace:kube-system,Attempt:0,}" Jul 6 23:59:39.577737 containerd[2083]: time="2025-07-06T23:59:39.577699098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-107,Uid:96a64197894b097e5e5c66ff77ec20c7,Namespace:kube-system,Attempt:0,}" Jul 6 23:59:39.739093 kubelet[2935]: E0706 23:59:39.739044 2935 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-107?timeout=10s\": dial tcp 172.31.19.107:6443: connect: connection refused" interval="800ms" Jul 6 23:59:39.910493 kubelet[2935]: I0706 23:59:39.910379 2935 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-107" Jul 6 23:59:39.910987 kubelet[2935]: E0706 23:59:39.910941 2935 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.19.107:6443/api/v1/nodes\": dial tcp 172.31.19.107:6443: connect: connection refused" node="ip-172-31-19-107" Jul 6 23:59:40.065190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount834587870.mount: Deactivated successfully. 
Jul 6 23:59:40.083251 containerd[2083]: time="2025-07-06T23:59:40.083182244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:59:40.085234 containerd[2083]: time="2025-07-06T23:59:40.085174692Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:59:40.087080 containerd[2083]: time="2025-07-06T23:59:40.086979830Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 6 23:59:40.089008 containerd[2083]: time="2025-07-06T23:59:40.088943809Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:59:40.091092 containerd[2083]: time="2025-07-06T23:59:40.091051782Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:59:40.093827 containerd[2083]: time="2025-07-06T23:59:40.093778683Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:59:40.095340 containerd[2083]: time="2025-07-06T23:59:40.095241671Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:59:40.098875 containerd[2083]: time="2025-07-06T23:59:40.098819921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:59:40.100624 containerd[2083]: time="2025-07-06T23:59:40.099616128Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 521.840085ms" Jul 6 23:59:40.100624 containerd[2083]: time="2025-07-06T23:59:40.100538838Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 529.266651ms" Jul 6 23:59:40.102454 containerd[2083]: time="2025-07-06T23:59:40.102413125Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 535.025691ms" Jul 6 23:59:40.289109 containerd[2083]: time="2025-07-06T23:59:40.288864373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:59:40.289670 containerd[2083]: time="2025-07-06T23:59:40.289094111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:59:40.289670 containerd[2083]: time="2025-07-06T23:59:40.289123672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:40.289670 containerd[2083]: time="2025-07-06T23:59:40.289216556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:40.290397 containerd[2083]: time="2025-07-06T23:59:40.290292752Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:59:40.290538 containerd[2083]: time="2025-07-06T23:59:40.290382251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:59:40.290597 containerd[2083]: time="2025-07-06T23:59:40.290528183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:40.290833 containerd[2083]: time="2025-07-06T23:59:40.290790684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:40.291268 containerd[2083]: time="2025-07-06T23:59:40.291165915Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:59:40.291363 containerd[2083]: time="2025-07-06T23:59:40.291250311Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:59:40.291489 containerd[2083]: time="2025-07-06T23:59:40.291416268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:40.293735 containerd[2083]: time="2025-07-06T23:59:40.292874284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:40.341143 kubelet[2935]: W0706 23:59:40.341080 2935 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.19.107:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-107&limit=500&resourceVersion=0": dial tcp 172.31.19.107:6443: connect: connection refused Jul 6 23:59:40.341359 kubelet[2935]: E0706 23:59:40.341153 2935 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.19.107:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-107&limit=500&resourceVersion=0\": dial tcp 172.31.19.107:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:59:40.386504 containerd[2083]: time="2025-07-06T23:59:40.386297598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-107,Uid:96a64197894b097e5e5c66ff77ec20c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"405562eccb247a8bba0bb732cfe33d23fe95a4f3edff0837d04a7ec32b4e0bc3\"" Jul 6 23:59:40.390681 containerd[2083]: time="2025-07-06T23:59:40.390541890Z" level=info msg="CreateContainer within sandbox \"405562eccb247a8bba0bb732cfe33d23fe95a4f3edff0837d04a7ec32b4e0bc3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 6 23:59:40.417656 containerd[2083]: time="2025-07-06T23:59:40.417165023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-107,Uid:fa9fa5e533519e888690081b0ffe220b,Namespace:kube-system,Attempt:0,} returns sandbox id \"25bf9ea93e307a3e06bcd67493913a5a69566dae4efacffc2fbd83611c51efb6\"" Jul 6 23:59:40.420465 containerd[2083]: time="2025-07-06T23:59:40.420423812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-107,Uid:b7b3933f49ea6a9d016e662a91d5dcaf,Namespace:kube-system,Attempt:0,} returns sandbox id \"97989de53c1732043c3296a41d90821c422b0938b737f096956ca1f497ee7701\"" Jul 6 23:59:40.421277 containerd[2083]: time="2025-07-06T23:59:40.421013587Z" level=info msg="CreateContainer within sandbox \"25bf9ea93e307a3e06bcd67493913a5a69566dae4efacffc2fbd83611c51efb6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 6 23:59:40.425692 containerd[2083]: time="2025-07-06T23:59:40.425618885Z" level=info msg="CreateContainer within sandbox \"405562eccb247a8bba0bb732cfe33d23fe95a4f3edff0837d04a7ec32b4e0bc3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"636cfbb17f269e5742e0bad4dbbd5575539f7f8b1a0d3f48b176f15723a364c8\"" Jul 6 23:59:40.435587 containerd[2083]: time="2025-07-06T23:59:40.435503422Z" level=info msg="StartContainer for \"636cfbb17f269e5742e0bad4dbbd5575539f7f8b1a0d3f48b176f15723a364c8\"" Jul 6 23:59:40.438611 containerd[2083]: time="2025-07-06T23:59:40.438570480Z" level=info msg="CreateContainer within sandbox \"97989de53c1732043c3296a41d90821c422b0938b737f096956ca1f497ee7701\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 6 23:59:40.447045 containerd[2083]: time="2025-07-06T23:59:40.446985732Z" level=info msg="CreateContainer within sandbox \"25bf9ea93e307a3e06bcd67493913a5a69566dae4efacffc2fbd83611c51efb6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"69291f5119274a2990465e57fa594fced02fb09d1dd71dbfdf6e85217ed8abd6\"" Jul 6 23:59:40.447669 containerd[2083]: time="2025-07-06T23:59:40.447619172Z" level=info msg="StartContainer for 
\"69291f5119274a2990465e57fa594fced02fb09d1dd71dbfdf6e85217ed8abd6\"" Jul 6 23:59:40.477159 containerd[2083]: time="2025-07-06T23:59:40.476264758Z" level=info msg="CreateContainer within sandbox \"97989de53c1732043c3296a41d90821c422b0938b737f096956ca1f497ee7701\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1eed787538f7f49f84d142c306ed2ff8760a2129297e205a07b027720fcf5605\"" Jul 6 23:59:40.481560 containerd[2083]: time="2025-07-06T23:59:40.480970568Z" level=info msg="StartContainer for \"1eed787538f7f49f84d142c306ed2ff8760a2129297e205a07b027720fcf5605\"" Jul 6 23:59:40.500700 kubelet[2935]: W0706 23:59:40.499111 2935 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.19.107:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.107:6443: connect: connection refused Jul 6 23:59:40.501005 kubelet[2935]: E0706 23:59:40.500952 2935 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.19.107:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.19.107:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:59:40.542834 kubelet[2935]: E0706 23:59:40.541261 2935 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-107?timeout=10s\": dial tcp 172.31.19.107:6443: connect: connection refused" interval="1.6s" Jul 6 23:59:40.583434 kubelet[2935]: W0706 23:59:40.583363 2935 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.19.107:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.19.107:6443: connect: connection refused Jul 6 23:59:40.586482 kubelet[2935]: E0706 23:59:40.586447 2935 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.19.107:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.19.107:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:59:40.596507 containerd[2083]: time="2025-07-06T23:59:40.596461656Z" level=info msg="StartContainer for \"69291f5119274a2990465e57fa594fced02fb09d1dd71dbfdf6e85217ed8abd6\" returns successfully" Jul 6 23:59:40.607680 containerd[2083]: time="2025-07-06T23:59:40.605283269Z" level=info msg="StartContainer for \"636cfbb17f269e5742e0bad4dbbd5575539f7f8b1a0d3f48b176f15723a364c8\" returns successfully" Jul 6 23:59:40.649141 containerd[2083]: time="2025-07-06T23:59:40.649089722Z" level=info msg="StartContainer for \"1eed787538f7f49f84d142c306ed2ff8760a2129297e205a07b027720fcf5605\" returns successfully" Jul 6 23:59:40.714895 kubelet[2935]: I0706 23:59:40.714545 2935 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-107" Jul 6 23:59:40.715197 kubelet[2935]: E0706 23:59:40.715172 2935 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.19.107:6443/api/v1/nodes\": dial tcp 172.31.19.107:6443: connect: connection refused" node="ip-172-31-19-107" Jul 6 23:59:40.739226 kubelet[2935]: W0706 23:59:40.739144 2935 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://172.31.19.107:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.107:6443: connect: connection refused Jul 6 23:59:40.739797 kubelet[2935]: E0706 23:59:40.739233 2935 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.19.107:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.19.107:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:59:41.182932 kubelet[2935]: E0706 23:59:41.182894 2935 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.19.107:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.19.107:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:59:42.319493 kubelet[2935]: I0706 23:59:42.319461 2935 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-107" Jul 6 23:59:43.823776 kubelet[2935]: E0706 23:59:43.823714 2935 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-19-107\" not found" node="ip-172-31-19-107" Jul 6 23:59:43.851875 kubelet[2935]: I0706 23:59:43.849833 2935 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-19-107" Jul 6 23:59:44.109921 kubelet[2935]: I0706 23:59:44.109785 2935 apiserver.go:52] "Watching apiserver" Jul 6 23:59:44.136191 kubelet[2935]: I0706 23:59:44.136122 2935 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 6 23:59:46.143183 systemd[1]: Reloading requested from client PID 3205 ('systemctl') (unit session-7.scope)... Jul 6 23:59:46.143202 systemd[1]: Reloading... Jul 6 23:59:46.248692 zram_generator::config[3246]: No configuration found. Jul 6 23:59:46.386300 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:59:46.484136 systemd[1]: Reloading finished in 340 ms. Jul 6 23:59:46.520378 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:59:46.539343 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:59:46.539833 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:59:46.561539 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:59:46.828919 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:59:46.842346 (kubelet)[3316]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:59:46.894650 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 6 23:59:46.942180 kubelet[3316]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:59:46.942180 kubelet[3316]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jul 6 23:59:46.942180 kubelet[3316]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:59:46.942180 kubelet[3316]: I0706 23:59:46.939713 3316 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:59:46.950625 kubelet[3316]: I0706 23:59:46.950575 3316 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 6 23:59:46.950625 kubelet[3316]: I0706 23:59:46.950611 3316 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:59:46.951000 kubelet[3316]: I0706 23:59:46.950977 3316 server.go:934] "Client rotation is on, will bootstrap in background" Jul 6 23:59:46.952415 kubelet[3316]: I0706 23:59:46.952384 3316 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 6 23:59:46.961163 kubelet[3316]: I0706 23:59:46.960959 3316 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:59:46.967702 kubelet[3316]: E0706 23:59:46.966090 3316 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:59:46.967702 kubelet[3316]: I0706 23:59:46.966128 3316 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:59:46.969041 kubelet[3316]: I0706 23:59:46.969025 3316 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:59:46.969579 kubelet[3316]: I0706 23:59:46.969565 3316 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 6 23:59:46.969812 kubelet[3316]: I0706 23:59:46.969785 3316 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:59:46.970059 kubelet[3316]: I0706 23:59:46.969877 3316 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-107","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 6 23:59:46.970186 kubelet[3316]: I0706 23:59:46.970177 3316 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:59:46.970238 kubelet[3316]: I0706 23:59:46.970232 3316 container_manager_linux.go:300] "Creating device plugin manager" Jul 6 23:59:46.970304 kubelet[3316]: I0706 23:59:46.970298 3316 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:59:46.970447 kubelet[3316]: I0706 23:59:46.970440 3316 kubelet.go:408] "Attempting to sync node with API server" Jul 6 23:59:46.970931 kubelet[3316]: I0706 23:59:46.970897 3316 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:59:46.971101 kubelet[3316]: I0706 23:59:46.971090 3316 kubelet.go:314] "Adding apiserver pod source" Jul 6 23:59:46.971191 kubelet[3316]: I0706 23:59:46.971181 3316 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:59:46.993180 kubelet[3316]: I0706 23:59:46.993136 3316 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 6 23:59:46.994165 kubelet[3316]: I0706 23:59:46.994145 3316 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:59:46.994820 kubelet[3316]: I0706 23:59:46.994805 3316 server.go:1274] "Started kubelet" Jul 6 23:59:46.999655 kubelet[3316]: I0706 23:59:46.999595 3316 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 
23:59:47.000059 kubelet[3316]: I0706 23:59:47.000031 3316 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:59:47.000752 kubelet[3316]: I0706 23:59:47.000628 3316 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:59:47.004170 kubelet[3316]: I0706 23:59:47.003466 3316 server.go:449] "Adding debug handlers to kubelet server" Jul 6 23:59:47.005736 kubelet[3316]: I0706 23:59:47.005717 3316 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:59:47.007369 kubelet[3316]: I0706 23:59:47.007345 3316 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:59:47.014032 kubelet[3316]: I0706 23:59:47.013072 3316 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 6 23:59:47.014032 kubelet[3316]: I0706 23:59:47.013276 3316 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 6 23:59:47.014032 kubelet[3316]: I0706 23:59:47.013464 3316 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:59:47.015059 kubelet[3316]: I0706 23:59:47.015023 3316 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:59:47.015343 kubelet[3316]: I0706 23:59:47.015322 3316 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:59:47.016972 kubelet[3316]: E0706 23:59:47.016950 3316 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:59:47.021525 kubelet[3316]: I0706 23:59:47.021503 3316 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:59:47.035698 kubelet[3316]: I0706 23:59:47.035623 3316 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:59:47.038342 kubelet[3316]: I0706 23:59:47.038204 3316 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 6 23:59:47.038342 kubelet[3316]: I0706 23:59:47.038243 3316 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 6 23:59:47.038342 kubelet[3316]: I0706 23:59:47.038269 3316 kubelet.go:2321] "Starting kubelet main sync loop" Jul 6 23:59:47.038342 kubelet[3316]: E0706 23:59:47.038323 3316 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:59:47.105160 kubelet[3316]: I0706 23:59:47.102048 3316 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 6 23:59:47.105160 kubelet[3316]: I0706 23:59:47.102076 3316 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 6 23:59:47.105160 kubelet[3316]: I0706 23:59:47.102099 3316 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:59:47.105160 kubelet[3316]: I0706 23:59:47.102259 3316 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 6 23:59:47.105160 kubelet[3316]: I0706 23:59:47.102268 3316 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 6 23:59:47.105160 kubelet[3316]: I0706 23:59:47.102287 3316 policy_none.go:49] "None policy: Start" Jul 6 23:59:47.105160 kubelet[3316]: I0706 23:59:47.103288 3316 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 6 23:59:47.105160 kubelet[3316]: I0706 23:59:47.103314 3316 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:59:47.105160 kubelet[3316]: I0706 23:59:47.103536 3316 state_mem.go:75] "Updated machine memory state" Jul 6 23:59:47.105737 kubelet[3316]: I0706 23:59:47.105287 3316 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:59:47.105737 kubelet[3316]: I0706 23:59:47.105717 3316 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:59:47.105815 kubelet[3316]: I0706 23:59:47.105732 3316 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:59:47.110277 kubelet[3316]: I0706 23:59:47.110012 3316 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:59:47.213786 kubelet[3316]: I0706 23:59:47.213756 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa9fa5e533519e888690081b0ffe220b-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-107\" (UID: \"fa9fa5e533519e888690081b0ffe220b\") " pod="kube-system/kube-apiserver-ip-172-31-19-107" Jul 6 23:59:47.214565 kubelet[3316]: I0706 23:59:47.213950 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b7b3933f49ea6a9d016e662a91d5dcaf-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-107\" (UID: \"b7b3933f49ea6a9d016e662a91d5dcaf\") " pod="kube-system/kube-controller-manager-ip-172-31-19-107" Jul 6 23:59:47.214565 kubelet[3316]: I0706 23:59:47.213972 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b7b3933f49ea6a9d016e662a91d5dcaf-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-107\" (UID: \"b7b3933f49ea6a9d016e662a91d5dcaf\") " pod="kube-system/kube-controller-manager-ip-172-31-19-107" Jul 6 23:59:47.214565 kubelet[3316]: I0706 23:59:47.213991 3316 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b7b3933f49ea6a9d016e662a91d5dcaf-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-107\" (UID: \"b7b3933f49ea6a9d016e662a91d5dcaf\") " pod="kube-system/kube-controller-manager-ip-172-31-19-107" Jul 6 23:59:47.214565 kubelet[3316]: I0706 23:59:47.214010 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b7b3933f49ea6a9d016e662a91d5dcaf-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-107\" (UID: \"b7b3933f49ea6a9d016e662a91d5dcaf\") " pod="kube-system/kube-controller-manager-ip-172-31-19-107" Jul 6 23:59:47.214565 kubelet[3316]: I0706 23:59:47.213876 3316 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-107" Jul 6 23:59:47.214565 kubelet[3316]: I0706 23:59:47.214026 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa9fa5e533519e888690081b0ffe220b-ca-certs\") pod \"kube-apiserver-ip-172-31-19-107\" (UID: \"fa9fa5e533519e888690081b0ffe220b\") " pod="kube-system/kube-apiserver-ip-172-31-19-107" Jul 6 23:59:47.215576 kubelet[3316]: I0706 23:59:47.214475 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b7b3933f49ea6a9d016e662a91d5dcaf-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-107\" (UID: \"b7b3933f49ea6a9d016e662a91d5dcaf\") " pod="kube-system/kube-controller-manager-ip-172-31-19-107" Jul 6 23:59:47.215576 kubelet[3316]: I0706 23:59:47.214494 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/96a64197894b097e5e5c66ff77ec20c7-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-107\" (UID: \"96a64197894b097e5e5c66ff77ec20c7\") " pod="kube-system/kube-scheduler-ip-172-31-19-107" Jul 6 23:59:47.215576 kubelet[3316]: I0706 23:59:47.214512 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa9fa5e533519e888690081b0ffe220b-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-107\" (UID: \"fa9fa5e533519e888690081b0ffe220b\") " pod="kube-system/kube-apiserver-ip-172-31-19-107" Jul 6 23:59:47.229647 kubelet[3316]: I0706 23:59:47.229130 3316 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-19-107" Jul 6 23:59:47.229647 kubelet[3316]: I0706 23:59:47.229250 3316 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-19-107" Jul 6 23:59:47.972420 kubelet[3316]: I0706 23:59:47.972383 3316 apiserver.go:52] "Watching apiserver" Jul 6 23:59:48.014041 kubelet[3316]: I0706 23:59:48.013972 3316 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 6 23:59:48.091449 kubelet[3316]: E0706 23:59:48.091202 3316 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-19-107\" already exists" pod="kube-system/kube-apiserver-ip-172-31-19-107" Jul 6 23:59:48.136406 kubelet[3316]: I0706 23:59:48.135781 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-19-107" podStartSLOduration=1.135763493 podStartE2EDuration="1.135763493s" 
podCreationTimestamp="2025-07-06 23:59:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:59:48.135142479 +0000 UTC m=+1.284568327" watchObservedRunningTime="2025-07-06 23:59:48.135763493 +0000 UTC m=+1.285189336" Jul 6 23:59:48.169378 kubelet[3316]: I0706 23:59:48.166177 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-19-107" podStartSLOduration=1.166155236 podStartE2EDuration="1.166155236s" podCreationTimestamp="2025-07-06 23:59:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:59:48.149419637 +0000 UTC m=+1.298845480" watchObservedRunningTime="2025-07-06 23:59:48.166155236 +0000 UTC m=+1.315581088" Jul 6 23:59:52.780524 kubelet[3316]: I0706 23:59:52.780486 3316 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:59:52.781673 containerd[2083]: time="2025-07-06T23:59:52.781245381Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 6 23:59:52.782413 kubelet[3316]: I0706 23:59:52.781820 3316 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:59:53.713249 kubelet[3316]: I0706 23:59:53.712529 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-19-107" podStartSLOduration=6.712510993 podStartE2EDuration="6.712510993s" podCreationTimestamp="2025-07-06 23:59:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:59:48.170057026 +0000 UTC m=+1.319482879" watchObservedRunningTime="2025-07-06 23:59:53.712510993 +0000 UTC m=+6.861936842" Jul 6 23:59:53.763232 kubelet[3316]: I0706 23:59:53.763119 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca5d6c47-161b-4d98-8253-cf96ca7b31b2-lib-modules\") pod \"kube-proxy-m49k8\" (UID: \"ca5d6c47-161b-4d98-8253-cf96ca7b31b2\") " pod="kube-system/kube-proxy-m49k8" Jul 6 23:59:53.763232 kubelet[3316]: I0706 23:59:53.763210 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca5d6c47-161b-4d98-8253-cf96ca7b31b2-xtables-lock\") pod \"kube-proxy-m49k8\" (UID: \"ca5d6c47-161b-4d98-8253-cf96ca7b31b2\") " pod="kube-system/kube-proxy-m49k8" Jul 6 23:59:53.763583 kubelet[3316]: I0706 23:59:53.763370 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ca5d6c47-161b-4d98-8253-cf96ca7b31b2-kube-proxy\") pod \"kube-proxy-m49k8\" (UID: \"ca5d6c47-161b-4d98-8253-cf96ca7b31b2\") " pod="kube-system/kube-proxy-m49k8" Jul 6 23:59:53.763583 kubelet[3316]: I0706 23:59:53.763407 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjlbq\" (UniqueName: \"kubernetes.io/projected/ca5d6c47-161b-4d98-8253-cf96ca7b31b2-kube-api-access-vjlbq\") pod \"kube-proxy-m49k8\" (UID: \"ca5d6c47-161b-4d98-8253-cf96ca7b31b2\") " pod="kube-system/kube-proxy-m49k8" Jul 6 23:59:53.965526 kubelet[3316]: I0706 23:59:53.965098 3316 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f9698c10-f42f-484d-a151-e6595d5d8bbf-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-tp5nx\" (UID: \"f9698c10-f42f-484d-a151-e6595d5d8bbf\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-tp5nx" Jul 6 23:59:53.965526 kubelet[3316]: I0706 23:59:53.965170 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vzfk\" (UniqueName: \"kubernetes.io/projected/f9698c10-f42f-484d-a151-e6595d5d8bbf-kube-api-access-8vzfk\") pod \"tigera-operator-5bf8dfcb4-tp5nx\" (UID: \"f9698c10-f42f-484d-a151-e6595d5d8bbf\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-tp5nx" Jul 6 23:59:54.020258 containerd[2083]: time="2025-07-06T23:59:54.019849432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m49k8,Uid:ca5d6c47-161b-4d98-8253-cf96ca7b31b2,Namespace:kube-system,Attempt:0,}" Jul 6 23:59:54.058325 containerd[2083]: time="2025-07-06T23:59:54.058210769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:59:54.058325 containerd[2083]: time="2025-07-06T23:59:54.058271979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:59:54.058598 containerd[2083]: time="2025-07-06T23:59:54.058288502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:54.058598 containerd[2083]: time="2025-07-06T23:59:54.058398481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:54.124215 containerd[2083]: time="2025-07-06T23:59:54.124083278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m49k8,Uid:ca5d6c47-161b-4d98-8253-cf96ca7b31b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"f9ad0f4227d3f5b07a2bd4c7fa3e8930b7afca0b2aeee740c47a10f54ebe5e36\"" Jul 6 23:59:54.131472 containerd[2083]: time="2025-07-06T23:59:54.131425250Z" level=info msg="CreateContainer within sandbox \"f9ad0f4227d3f5b07a2bd4c7fa3e8930b7afca0b2aeee740c47a10f54ebe5e36\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 6 23:59:54.161487 containerd[2083]: time="2025-07-06T23:59:54.161246784Z" level=info msg="CreateContainer within sandbox \"f9ad0f4227d3f5b07a2bd4c7fa3e8930b7afca0b2aeee740c47a10f54ebe5e36\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b5c4311c56dfd0e6f7176f48e1a6054434b90baf266d21010dd4458aeac2a3a3\"" Jul 6 23:59:54.163907 containerd[2083]: time="2025-07-06T23:59:54.162685052Z" level=info msg="StartContainer for \"b5c4311c56dfd0e6f7176f48e1a6054434b90baf266d21010dd4458aeac2a3a3\"" Jul 6 23:59:54.224238 containerd[2083]: time="2025-07-06T23:59:54.224117812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-tp5nx,Uid:f9698c10-f42f-484d-a151-e6595d5d8bbf,Namespace:tigera-operator,Attempt:0,}" Jul 6 23:59:54.239245 containerd[2083]: time="2025-07-06T23:59:54.239191004Z" level=info msg="StartContainer for \"b5c4311c56dfd0e6f7176f48e1a6054434b90baf266d21010dd4458aeac2a3a3\" returns successfully" Jul 6 23:59:54.271435 containerd[2083]: time="2025-07-06T23:59:54.271263014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:59:54.272081 containerd[2083]: time="2025-07-06T23:59:54.271969919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:59:54.272081 containerd[2083]: time="2025-07-06T23:59:54.272033314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:54.272243 containerd[2083]: time="2025-07-06T23:59:54.272198825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:54.345592 containerd[2083]: time="2025-07-06T23:59:54.345546916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-tp5nx,Uid:f9698c10-f42f-484d-a151-e6595d5d8bbf,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5d1d36d1a4dc9374996c6c322a5d2076430ae3c568cbdb992d9eefccf1489f95\"" Jul 6 23:59:54.349206 containerd[2083]: time="2025-07-06T23:59:54.347875807Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 6 23:59:55.567550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1588799612.mount: Deactivated successfully. Jul 6 23:59:56.363821 containerd[2083]: time="2025-07-06T23:59:56.363766286Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:56.366110 containerd[2083]: time="2025-07-06T23:59:56.366037198Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 6 23:59:56.368411 containerd[2083]: time="2025-07-06T23:59:56.368153517Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:56.377603 containerd[2083]: time="2025-07-06T23:59:56.377551711Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:56.378028 containerd[2083]: time="2025-07-06T23:59:56.377992293Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.02793625s" Jul 6 23:59:56.378761 containerd[2083]: time="2025-07-06T23:59:56.378031943Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 6 23:59:56.407339 containerd[2083]: time="2025-07-06T23:59:56.407277565Z" level=info msg="CreateContainer within sandbox \"5d1d36d1a4dc9374996c6c322a5d2076430ae3c568cbdb992d9eefccf1489f95\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 6 23:59:56.431502 containerd[2083]: time="2025-07-06T23:59:56.431439514Z" level=info msg="CreateContainer within sandbox \"5d1d36d1a4dc9374996c6c322a5d2076430ae3c568cbdb992d9eefccf1489f95\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"197bb42de4bca5ce2f7aeb4206c296bf1689332c2ba444e1b3a6aa4128870d9e\"" Jul 6 23:59:56.432877 containerd[2083]: 
time="2025-07-06T23:59:56.432095301Z" level=info msg="StartContainer for \"197bb42de4bca5ce2f7aeb4206c296bf1689332c2ba444e1b3a6aa4128870d9e\"" Jul 6 23:59:56.496863 containerd[2083]: time="2025-07-06T23:59:56.496798385Z" level=info msg="StartContainer for \"197bb42de4bca5ce2f7aeb4206c296bf1689332c2ba444e1b3a6aa4128870d9e\" returns successfully" Jul 6 23:59:57.124856 kubelet[3316]: I0706 23:59:57.124300 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m49k8" podStartSLOduration=4.124274921 podStartE2EDuration="4.124274921s" podCreationTimestamp="2025-07-06 23:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:59:55.117342225 +0000 UTC m=+8.266768074" watchObservedRunningTime="2025-07-06 23:59:57.124274921 +0000 UTC m=+10.273700771" Jul 6 23:59:57.653987 kubelet[3316]: I0706 23:59:57.653910 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-tp5nx" podStartSLOduration=2.621798992 podStartE2EDuration="4.653883876s" podCreationTimestamp="2025-07-06 23:59:53 +0000 UTC" firstStartedPulling="2025-07-06 23:59:54.347102452 +0000 UTC m=+7.496528281" lastFinishedPulling="2025-07-06 23:59:56.379187336 +0000 UTC m=+9.528613165" observedRunningTime="2025-07-06 23:59:57.125966613 +0000 UTC m=+10.275392463" watchObservedRunningTime="2025-07-06 23:59:57.653883876 +0000 UTC m=+10.803309784" Jul 7 00:00:00.647587 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. Jul 7 00:00:00.673130 systemd[1]: Starting mdadm.service - Initiates a check run of an MD array's redundancy information.... Jul 7 00:00:00.735149 systemd[1]: mdadm.service: Deactivated successfully. Jul 7 00:00:00.736165 systemd[1]: Finished mdadm.service - Initiates a check run of an MD array's redundancy information.. Jul 7 00:00:00.754646 systemd[1]: logrotate.service: Deactivated successfully. Jul 7 00:00:01.568197 update_engine[2063]: I20250707 00:00:01.568063 2063 update_attempter.cc:509] Updating boot flags... Jul 7 00:00:01.984904 systemd-journald[1569]: Under memory pressure, flushing caches. Jul 7 00:00:01.982532 systemd-resolved[1982]: Under memory pressure, flushing caches. Jul 7 00:00:01.982604 systemd-resolved[1982]: Flushed all caches. Jul 7 00:00:02.773567 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3703) Jul 7 00:00:05.118927 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3704) Jul 7 00:00:06.940225 sudo[2440]: pam_unix(sudo:session): session closed for user root Jul 7 00:00:06.966922 sshd[2436]: pam_unix(sshd:session): session closed for user core Jul 7 00:00:06.975891 systemd[1]: sshd@6-172.31.19.107:22-147.75.109.163:54924.service: Deactivated successfully. Jul 7 00:00:06.994802 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 00:00:06.996912 systemd-logind[2061]: Session 7 logged out. Waiting for processes to exit. Jul 7 00:00:07.010758 systemd-logind[2061]: Removed session 7. 
Jul 7 00:00:12.441694 kubelet[3316]: I0707 00:00:12.440920 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ad609f7-823a-4ac8-93ae-7409a70e7d42-tigera-ca-bundle\") pod \"calico-typha-8567cd9d8b-9hf8b\" (UID: \"6ad609f7-823a-4ac8-93ae-7409a70e7d42\") " pod="calico-system/calico-typha-8567cd9d8b-9hf8b" Jul 7 00:00:12.441694 kubelet[3316]: I0707 00:00:12.440991 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsgq4\" (UniqueName: \"kubernetes.io/projected/6ad609f7-823a-4ac8-93ae-7409a70e7d42-kube-api-access-lsgq4\") pod \"calico-typha-8567cd9d8b-9hf8b\" (UID: \"6ad609f7-823a-4ac8-93ae-7409a70e7d42\") " pod="calico-system/calico-typha-8567cd9d8b-9hf8b" Jul 7 00:00:12.441694 kubelet[3316]: I0707 00:00:12.441026 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6ad609f7-823a-4ac8-93ae-7409a70e7d42-typha-certs\") pod \"calico-typha-8567cd9d8b-9hf8b\" (UID: \"6ad609f7-823a-4ac8-93ae-7409a70e7d42\") " pod="calico-system/calico-typha-8567cd9d8b-9hf8b" Jul 7 00:00:12.746691 containerd[2083]: time="2025-07-07T00:00:12.746532363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8567cd9d8b-9hf8b,Uid:6ad609f7-823a-4ac8-93ae-7409a70e7d42,Namespace:calico-system,Attempt:0,}" Jul 7 00:00:12.829273 containerd[2083]: time="2025-07-07T00:00:12.827706342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:00:12.829273 containerd[2083]: time="2025-07-07T00:00:12.827808207Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:00:12.829273 containerd[2083]: time="2025-07-07T00:00:12.827826617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:12.829273 containerd[2083]: time="2025-07-07T00:00:12.827974342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:12.947135 kubelet[3316]: I0707 00:00:12.947075 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/657c32e6-62b2-4659-a79b-6811efa3d7af-policysync\") pod \"calico-node-xlct9\" (UID: \"657c32e6-62b2-4659-a79b-6811efa3d7af\") " pod="calico-system/calico-node-xlct9" Jul 7 00:00:12.947357 kubelet[3316]: I0707 00:00:12.947154 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/657c32e6-62b2-4659-a79b-6811efa3d7af-var-run-calico\") pod \"calico-node-xlct9\" (UID: \"657c32e6-62b2-4659-a79b-6811efa3d7af\") " pod="calico-system/calico-node-xlct9" Jul 7 00:00:12.947357 kubelet[3316]: I0707 00:00:12.947179 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzk6b\" (UniqueName: \"kubernetes.io/projected/657c32e6-62b2-4659-a79b-6811efa3d7af-kube-api-access-hzk6b\") pod \"calico-node-xlct9\" (UID: \"657c32e6-62b2-4659-a79b-6811efa3d7af\") " pod="calico-system/calico-node-xlct9" Jul 7 00:00:12.947357 kubelet[3316]: I0707 00:00:12.947213 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/657c32e6-62b2-4659-a79b-6811efa3d7af-node-certs\") pod \"calico-node-xlct9\" (UID: \"657c32e6-62b2-4659-a79b-6811efa3d7af\") " pod="calico-system/calico-node-xlct9" Jul 7 00:00:12.947357 kubelet[3316]: I0707 00:00:12.947235 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/657c32e6-62b2-4659-a79b-6811efa3d7af-cni-net-dir\") pod \"calico-node-xlct9\" (UID: \"657c32e6-62b2-4659-a79b-6811efa3d7af\") " pod="calico-system/calico-node-xlct9" Jul 7 00:00:12.947357 kubelet[3316]: I0707 00:00:12.947258 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/657c32e6-62b2-4659-a79b-6811efa3d7af-cni-bin-dir\") pod \"calico-node-xlct9\" (UID: \"657c32e6-62b2-4659-a79b-6811efa3d7af\") " pod="calico-system/calico-node-xlct9" Jul 7 00:00:12.947581 kubelet[3316]: I0707 00:00:12.947279 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/657c32e6-62b2-4659-a79b-6811efa3d7af-cni-log-dir\") pod \"calico-node-xlct9\" (UID: \"657c32e6-62b2-4659-a79b-6811efa3d7af\") " pod="calico-system/calico-node-xlct9" Jul 7 00:00:12.947581 kubelet[3316]: I0707 00:00:12.947305 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/657c32e6-62b2-4659-a79b-6811efa3d7af-lib-modules\") pod \"calico-node-xlct9\" (UID: \"657c32e6-62b2-4659-a79b-6811efa3d7af\") " pod="calico-system/calico-node-xlct9" Jul 7 00:00:12.947581 kubelet[3316]: I0707 00:00:12.947328 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/657c32e6-62b2-4659-a79b-6811efa3d7af-xtables-lock\") pod \"calico-node-xlct9\" (UID: \"657c32e6-62b2-4659-a79b-6811efa3d7af\") " pod="calico-system/calico-node-xlct9" Jul 7 00:00:12.947581 kubelet[3316]: I0707 
00:00:12.947351 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/657c32e6-62b2-4659-a79b-6811efa3d7af-flexvol-driver-host\") pod \"calico-node-xlct9\" (UID: \"657c32e6-62b2-4659-a79b-6811efa3d7af\") " pod="calico-system/calico-node-xlct9" Jul 7 00:00:12.947581 kubelet[3316]: I0707 00:00:12.947378 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/657c32e6-62b2-4659-a79b-6811efa3d7af-tigera-ca-bundle\") pod \"calico-node-xlct9\" (UID: \"657c32e6-62b2-4659-a79b-6811efa3d7af\") " pod="calico-system/calico-node-xlct9" Jul 7 00:00:12.947820 kubelet[3316]: I0707 00:00:12.947405 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/657c32e6-62b2-4659-a79b-6811efa3d7af-var-lib-calico\") pod \"calico-node-xlct9\" (UID: \"657c32e6-62b2-4659-a79b-6811efa3d7af\") " pod="calico-system/calico-node-xlct9" Jul 7 00:00:12.989185 containerd[2083]: time="2025-07-07T00:00:12.984280807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8567cd9d8b-9hf8b,Uid:6ad609f7-823a-4ac8-93ae-7409a70e7d42,Namespace:calico-system,Attempt:0,} returns sandbox id \"4e266c228e2d2544c51b2c608c8c1d93b481fd9bf7ea0af8a6a40eb83e170b00\"" Jul 7 00:00:12.989185 containerd[2083]: time="2025-07-07T00:00:12.987349867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 7 00:00:13.053347 kubelet[3316]: E0707 00:00:13.053055 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.053347 kubelet[3316]: W0707 00:00:13.053090 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.055846 kubelet[3316]: E0707 00:00:13.053307 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:13.056488 kubelet[3316]: E0707 00:00:13.056228 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.056488 kubelet[3316]: W0707 00:00:13.056251 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.056488 kubelet[3316]: E0707 00:00:13.056279 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:00:13.056985 kubelet[3316]: E0707 00:00:13.056830 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.056985 kubelet[3316]: W0707 00:00:13.056847 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.056985 kubelet[3316]: E0707 00:00:13.056881 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:13.059628 kubelet[3316]: E0707 00:00:13.059398 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.059628 kubelet[3316]: W0707 00:00:13.059418 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.059628 kubelet[3316]: E0707 00:00:13.059451 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:13.062117 kubelet[3316]: E0707 00:00:13.062008 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.062117 kubelet[3316]: W0707 00:00:13.062033 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.062117 kubelet[3316]: E0707 00:00:13.062058 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:13.064281 kubelet[3316]: E0707 00:00:13.064262 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.064441 kubelet[3316]: W0707 00:00:13.064376 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.064441 kubelet[3316]: E0707 00:00:13.064403 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:13.068816 kubelet[3316]: E0707 00:00:13.068587 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.068816 kubelet[3316]: W0707 00:00:13.068611 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.068816 kubelet[3316]: E0707 00:00:13.068634 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:00:13.166628 kubelet[3316]: E0707 00:00:13.166250 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vmlwg" podUID="fd3bd012-86e5-4807-95d5-ad6901284597" Jul 7 00:00:13.208927 containerd[2083]: time="2025-07-07T00:00:13.208533183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xlct9,Uid:657c32e6-62b2-4659-a79b-6811efa3d7af,Namespace:calico-system,Attempt:0,}" Jul 7 00:00:13.243881 kubelet[3316]: E0707 00:00:13.243803 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.244419 kubelet[3316]: W0707 00:00:13.244278 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.245803 kubelet[3316]: E0707 00:00:13.244649 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:13.246837 kubelet[3316]: E0707 00:00:13.246805 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.246837 kubelet[3316]: W0707 00:00:13.246829 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.246987 kubelet[3316]: E0707 00:00:13.246857 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:13.248430 kubelet[3316]: E0707 00:00:13.248274 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.248430 kubelet[3316]: W0707 00:00:13.248296 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.248430 kubelet[3316]: E0707 00:00:13.248339 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:13.249555 kubelet[3316]: E0707 00:00:13.248715 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.249555 kubelet[3316]: W0707 00:00:13.248733 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.249555 kubelet[3316]: E0707 00:00:13.248769 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:00:13.249555 kubelet[3316]: E0707 00:00:13.249510 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.249555 kubelet[3316]: W0707 00:00:13.249524 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.251206 kubelet[3316]: E0707 00:00:13.249538 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:13.251206 kubelet[3316]: E0707 00:00:13.249989 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.251206 kubelet[3316]: W0707 00:00:13.250002 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.251206 kubelet[3316]: E0707 00:00:13.250019 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:13.251206 kubelet[3316]: E0707 00:00:13.250465 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.251206 kubelet[3316]: W0707 00:00:13.250476 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.251206 kubelet[3316]: E0707 00:00:13.250511 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:13.252938 kubelet[3316]: E0707 00:00:13.251471 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.252938 kubelet[3316]: W0707 00:00:13.251506 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.252938 kubelet[3316]: E0707 00:00:13.251522 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:13.254515 kubelet[3316]: E0707 00:00:13.254128 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.254515 kubelet[3316]: W0707 00:00:13.254145 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.254515 kubelet[3316]: E0707 00:00:13.254171 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:00:13.254515 kubelet[3316]: E0707 00:00:13.254478 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.254515 kubelet[3316]: W0707 00:00:13.254493 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.255272 kubelet[3316]: E0707 00:00:13.254829 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:13.255560 kubelet[3316]: E0707 00:00:13.255546 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.255653 kubelet[3316]: W0707 00:00:13.255640 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.255822 kubelet[3316]: E0707 00:00:13.255807 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:13.256296 kubelet[3316]: E0707 00:00:13.256145 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.256296 kubelet[3316]: W0707 00:00:13.256157 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.256296 kubelet[3316]: E0707 00:00:13.256189 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:13.256296 kubelet[3316]: I0707 00:00:13.256222 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fd3bd012-86e5-4807-95d5-ad6901284597-kubelet-dir\") pod \"csi-node-driver-vmlwg\" (UID: \"fd3bd012-86e5-4807-95d5-ad6901284597\") " pod="calico-system/csi-node-driver-vmlwg" Jul 7 00:00:13.257563 kubelet[3316]: E0707 00:00:13.257405 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.257563 kubelet[3316]: W0707 00:00:13.257426 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.257563 kubelet[3316]: E0707 00:00:13.257500 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:00:13.257563 kubelet[3316]: I0707 00:00:13.257533 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fd3bd012-86e5-4807-95d5-ad6901284597-registration-dir\") pod \"csi-node-driver-vmlwg\" (UID: \"fd3bd012-86e5-4807-95d5-ad6901284597\") " pod="calico-system/csi-node-driver-vmlwg" Jul 7 00:00:13.258575 kubelet[3316]: E0707 00:00:13.258143 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.258575 kubelet[3316]: W0707 00:00:13.258305 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.258575 kubelet[3316]: E0707 00:00:13.258329 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:13.259130 kubelet[3316]: E0707 00:00:13.259004 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.259130 kubelet[3316]: W0707 00:00:13.259019 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.259473 kubelet[3316]: E0707 00:00:13.259276 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:13.260422 kubelet[3316]: E0707 00:00:13.260319 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.260422 kubelet[3316]: W0707 00:00:13.260332 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.260902 kubelet[3316]: E0707 00:00:13.260643 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:13.261449 kubelet[3316]: E0707 00:00:13.261193 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.261449 kubelet[3316]: W0707 00:00:13.261208 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.262482 kubelet[3316]: E0707 00:00:13.262308 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:00:13.262482 kubelet[3316]: E0707 00:00:13.262362 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.262482 kubelet[3316]: W0707 00:00:13.262402 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.262820 kubelet[3316]: E0707 00:00:13.262616 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:13.263185 kubelet[3316]: E0707 00:00:13.263049 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.263185 kubelet[3316]: W0707 00:00:13.263063 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.263185 kubelet[3316]: E0707 00:00:13.263086 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:13.263752 kubelet[3316]: E0707 00:00:13.263558 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.263752 kubelet[3316]: W0707 00:00:13.263587 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.263752 kubelet[3316]: E0707 00:00:13.263610 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:13.264356 kubelet[3316]: E0707 00:00:13.264187 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.264356 kubelet[3316]: W0707 00:00:13.264201 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.264630 kubelet[3316]: E0707 00:00:13.264470 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:13.265154 kubelet[3316]: E0707 00:00:13.264846 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.265154 kubelet[3316]: W0707 00:00:13.264859 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.265154 kubelet[3316]: E0707 00:00:13.264877 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:00:13.265523 kubelet[3316]: E0707 00:00:13.265511 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.265611 kubelet[3316]: W0707 00:00:13.265601 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.265753 kubelet[3316]: E0707 00:00:13.265691 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:13.266150 kubelet[3316]: E0707 00:00:13.266050 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.266150 kubelet[3316]: W0707 00:00:13.266079 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.266150 kubelet[3316]: E0707 00:00:13.266094 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:13.266605 kubelet[3316]: E0707 00:00:13.266514 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.266605 kubelet[3316]: W0707 00:00:13.266527 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.266605 kubelet[3316]: E0707 00:00:13.266556 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:13.267216 kubelet[3316]: E0707 00:00:13.267114 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.267216 kubelet[3316]: W0707 00:00:13.267144 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.267216 kubelet[3316]: E0707 00:00:13.267159 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:13.282474 containerd[2083]: time="2025-07-07T00:00:13.282293575Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:00:13.282474 containerd[2083]: time="2025-07-07T00:00:13.282402047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:00:13.282474 containerd[2083]: time="2025-07-07T00:00:13.282424740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:13.282955 containerd[2083]: time="2025-07-07T00:00:13.282563438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:13.360037 kubelet[3316]: E0707 00:00:13.359823 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.360037 kubelet[3316]: W0707 00:00:13.359851 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.360037 kubelet[3316]: E0707 00:00:13.359897 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:13.363345 kubelet[3316]: E0707 00:00:13.362784 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.363345 kubelet[3316]: W0707 00:00:13.362846 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.365118 kubelet[3316]: E0707 00:00:13.362882 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:13.365118 kubelet[3316]: E0707 00:00:13.364949 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.365118 kubelet[3316]: W0707 00:00:13.364967 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.365118 kubelet[3316]: E0707 00:00:13.364993 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:13.367402 kubelet[3316]: I0707 00:00:13.363837 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fd3bd012-86e5-4807-95d5-ad6901284597-varrun\") pod \"csi-node-driver-vmlwg\" (UID: \"fd3bd012-86e5-4807-95d5-ad6901284597\") " pod="calico-system/csi-node-driver-vmlwg" Jul 7 00:00:13.367923 kubelet[3316]: E0707 00:00:13.367906 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.368132 kubelet[3316]: W0707 00:00:13.368111 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.368262 kubelet[3316]: E0707 00:00:13.368247 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:00:13.369044 kubelet[3316]: E0707 00:00:13.369014 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.369183 kubelet[3316]: W0707 00:00:13.369168 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.369336 kubelet[3316]: E0707 00:00:13.369250 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:13.372969 kubelet[3316]: I0707 00:00:13.370862 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42q6x\" (UniqueName: \"kubernetes.io/projected/fd3bd012-86e5-4807-95d5-ad6901284597-kube-api-access-42q6x\") pod \"csi-node-driver-vmlwg\" (UID: \"fd3bd012-86e5-4807-95d5-ad6901284597\") " pod="calico-system/csi-node-driver-vmlwg" Jul 7 00:00:13.372969 kubelet[3316]: E0707 00:00:13.371166 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.372969 kubelet[3316]: W0707 00:00:13.371190 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.372969 kubelet[3316]: E0707 00:00:13.371339 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:13.372969 kubelet[3316]: E0707 00:00:13.372208 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.372969 kubelet[3316]: W0707 00:00:13.372222 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.372969 kubelet[3316]: E0707 00:00:13.372328 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:13.374726 kubelet[3316]: E0707 00:00:13.374225 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:13.374726 kubelet[3316]: W0707 00:00:13.374349 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:13.374726 kubelet[3316]: E0707 00:00:13.374577 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 7 00:00:13.377483 kubelet[3316]: E0707 00:00:13.376356 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:13.377483 kubelet[3316]: W0707 00:00:13.376932 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:13.377483 kubelet[3316]: E0707 00:00:13.377028 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 00:00:13.377483 kubelet[3316]: I0707 00:00:13.377174 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fd3bd012-86e5-4807-95d5-ad6901284597-socket-dir\") pod \"csi-node-driver-vmlwg\" (UID: \"fd3bd012-86e5-4807-95d5-ad6901284597\") " pod="calico-system/csi-node-driver-vmlwg"
[The same three-record FlexVolume init failure repeats, with only timestamps changing, from 00:00:13.377835 through 00:00:13.388152; repeats elided.]
Jul 7 00:00:13.409724 containerd[2083]: time="2025-07-07T00:00:13.409480527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xlct9,Uid:657c32e6-62b2-4659-a79b-6811efa3d7af,Namespace:calico-system,Attempt:0,} returns sandbox id \"67a583cf8a3946c6bd5a85b98d3df2c6b071ee28d57f0b9033669acf9cf9ac1c\""
[The FlexVolume init failure triplet repeats again, timestamps only changing, from 00:00:13.487397 through 00:00:13.505884; repeats elided.]
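The burst above is kubelet's dynamic plugin prober invoking the FlexVolume driver at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the init verb: the binary is missing, stdout is empty, and unmarshalling "" as JSON fails with "unexpected end of JSON input". A minimal sketch of a driver that would satisfy this probe, assuming only the documented FlexVolume convention that every call prints a JSON status object to stdout (this is illustrative Go, not Calico's actual nodeagent driver):

```go
// Hypothetical minimal FlexVolume driver. The kubelet runs it as
// `<driver> init` and parses stdout as JSON, which is why an empty
// stdout yields "unexpected end of JSON input" in the log above.
package main

import (
	"encoding/json"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	out := json.NewEncoder(os.Stdout)
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// init reports whether the driver implements attach/detach.
		out.Encode(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
		return
	}
	// Any verb this sketch does not implement.
	out.Encode(driverStatus{Status: "Not supported"})
	os.Exit(1)
}
```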
Jul 7 00:00:15.038952 kubelet[3316]: E0707 00:00:15.038694 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vmlwg" podUID="fd3bd012-86e5-4807-95d5-ad6901284597"
Jul 7 00:00:17.040077 kubelet[3316]: E0707 00:00:17.039578 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vmlwg" podUID="fd3bd012-86e5-4807-95d5-ad6901284597"
Jul 7 00:00:19.040185 kubelet[3316]: E0707 00:00:19.038815 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vmlwg" podUID="fd3bd012-86e5-4807-95d5-ad6901284597"
Jul 7 00:00:19.595203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2923930855.mount: Deactivated successfully.
Jul 7 00:00:20.376902 containerd[2083]: time="2025-07-07T00:00:20.376847078Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:00:20.380107 containerd[2083]: time="2025-07-07T00:00:20.379937925Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364"
Jul 7 00:00:20.382689 containerd[2083]: time="2025-07-07T00:00:20.382578587Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:00:20.391916 containerd[2083]: time="2025-07-07T00:00:20.391835329Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:00:20.392926 containerd[2083]: time="2025-07-07T00:00:20.392536350Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 7.405142022s"
Jul 7 00:00:20.392926 containerd[2083]: time="2025-07-07T00:00:20.392569848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\""
Jul 7 00:00:20.394274 containerd[2083]: time="2025-07-07T00:00:20.394245138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Jul 7 00:00:20.415639 containerd[2083]: time="2025-07-07T00:00:20.415572197Z" level=info msg="CreateContainer within sandbox \"4e266c228e2d2544c51b2c608c8c1d93b481fd9bf7ea0af8a6a40eb83e170b00\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jul 7 00:00:20.442061 containerd[2083]: time="2025-07-07T00:00:20.442013961Z" level=info msg="CreateContainer within sandbox \"4e266c228e2d2544c51b2c608c8c1d93b481fd9bf7ea0af8a6a40eb83e170b00\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f7b29d67497e7bb299653bb551fe52969d85dcf4e2d93ad63cf59b40ff7e8074\""
Jul 7 00:00:20.444159 containerd[2083]: time="2025-07-07T00:00:20.444054291Z" level=info msg="StartContainer for \"f7b29d67497e7bb299653bb551fe52969d85dcf4e2d93ad63cf59b40ff7e8074\""
Jul 7 00:00:20.542793 containerd[2083]: time="2025-07-07T00:00:20.542387542Z" level=info msg="StartContainer for \"f7b29d67497e7bb299653bb551fe52969d85dcf4e2d93ad63cf59b40ff7e8074\" returns successfully"
Jul 7 00:00:21.039119 kubelet[3316]: E0707 00:00:21.039052 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vmlwg" podUID="fd3bd012-86e5-4807-95d5-ad6901284597"
Jul 7 00:00:21.333845 kubelet[3316]: E0707 00:00:21.333080 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:21.333845 kubelet[3316]: W0707 00:00:21.333132 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:21.333845 kubelet[3316]: E0707 00:00:21.333329 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[The FlexVolume init failure triplet repeats, timestamps only changing, from 00:00:21.333694 through 00:00:21.365620; repeats elided.]
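The containerd records at 00:00:20 trace a complete image pull: ImageCreate events for the tag, the image id, and the repo digest, then a Pulled summary with the measured duration (7.405142022s for typha v3.30.2). A hedged sketch of the equivalent pull through containerd's Go client, assuming access to the node's containerd socket and the "k8s.io" namespace the kubelet uses:

```go
// Minimal sketch: pull the same typha image via containerd's client API
// and report the digest and wall-clock duration, mirroring the
// "Pulled image ... in 7.405142022s" record above.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	start := time.Now()
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.30.2", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pulled %s (%s) in %s\n", img.Name(), img.Target().Digest, time.Since(start))
}
```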
Error: unexpected end of JSON input" Jul 7 00:00:22.334001 kubelet[3316]: I0707 00:00:22.333968 3316 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:00:22.343467 kubelet[3316]: E0707 00:00:22.343432 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:22.343467 kubelet[3316]: W0707 00:00:22.343456 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:22.343467 kubelet[3316]: E0707 00:00:22.343480 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:22.343786 kubelet[3316]: E0707 00:00:22.343771 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:22.343786 kubelet[3316]: W0707 00:00:22.343784 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:22.343846 kubelet[3316]: E0707 00:00:22.343799 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:22.344052 kubelet[3316]: E0707 00:00:22.344036 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:22.344117 kubelet[3316]: W0707 00:00:22.344049 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:22.344117 kubelet[3316]: E0707 00:00:22.344069 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:22.344296 kubelet[3316]: E0707 00:00:22.344282 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:22.344296 kubelet[3316]: W0707 00:00:22.344293 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:22.344387 kubelet[3316]: E0707 00:00:22.344302 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:22.344481 kubelet[3316]: E0707 00:00:22.344468 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:22.344481 kubelet[3316]: W0707 00:00:22.344478 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:22.344551 kubelet[3316]: E0707 00:00:22.344486 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:00:22.344640 kubelet[3316]: E0707 00:00:22.344628 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:22.344640 kubelet[3316]: W0707 00:00:22.344638 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:22.344736 kubelet[3316]: E0707 00:00:22.344645 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:22.344895 kubelet[3316]: E0707 00:00:22.344867 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:22.344895 kubelet[3316]: W0707 00:00:22.344890 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:22.344994 kubelet[3316]: E0707 00:00:22.344907 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:22.345360 kubelet[3316]: E0707 00:00:22.345327 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:22.345360 kubelet[3316]: W0707 00:00:22.345341 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:22.345712 kubelet[3316]: E0707 00:00:22.345490 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:22.346106 kubelet[3316]: E0707 00:00:22.345814 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:22.346106 kubelet[3316]: W0707 00:00:22.345828 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:22.346106 kubelet[3316]: E0707 00:00:22.345855 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:22.346106 kubelet[3316]: E0707 00:00:22.346075 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:22.346106 kubelet[3316]: W0707 00:00:22.346086 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:22.346106 kubelet[3316]: E0707 00:00:22.346098 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:00:22.346890 kubelet[3316]: E0707 00:00:22.346301 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:22.346890 kubelet[3316]: W0707 00:00:22.346312 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:22.346890 kubelet[3316]: E0707 00:00:22.346327 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:22.346890 kubelet[3316]: E0707 00:00:22.346536 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:22.346890 kubelet[3316]: W0707 00:00:22.346546 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:22.346890 kubelet[3316]: E0707 00:00:22.346559 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:22.346890 kubelet[3316]: E0707 00:00:22.346817 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:22.346890 kubelet[3316]: W0707 00:00:22.346828 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:22.346890 kubelet[3316]: E0707 00:00:22.346842 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:22.347292 kubelet[3316]: E0707 00:00:22.347051 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:22.347292 kubelet[3316]: W0707 00:00:22.347062 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:22.347292 kubelet[3316]: E0707 00:00:22.347073 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:22.347292 kubelet[3316]: E0707 00:00:22.347263 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:22.347423 kubelet[3316]: W0707 00:00:22.347302 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:22.347423 kubelet[3316]: E0707 00:00:22.347316 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:00:22.370331 kubelet[3316]: E0707 00:00:22.370266 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:22.370331 kubelet[3316]: W0707 00:00:22.370296 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:22.370331 kubelet[3316]: E0707 00:00:22.370319 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:22.370739 kubelet[3316]: E0707 00:00:22.370722 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:22.370739 kubelet[3316]: W0707 00:00:22.370737 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:22.370827 kubelet[3316]: E0707 00:00:22.370762 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:22.371021 kubelet[3316]: E0707 00:00:22.370994 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:22.371021 kubelet[3316]: W0707 00:00:22.371016 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:22.371183 kubelet[3316]: E0707 00:00:22.371035 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:22.371306 kubelet[3316]: E0707 00:00:22.371288 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:22.371383 kubelet[3316]: W0707 00:00:22.371310 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:22.371383 kubelet[3316]: E0707 00:00:22.371331 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:22.371591 kubelet[3316]: E0707 00:00:22.371576 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:22.371591 kubelet[3316]: W0707 00:00:22.371588 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:22.371722 kubelet[3316]: E0707 00:00:22.371608 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:00:22.371947 kubelet[3316]: E0707 00:00:22.371934 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:22.372129 kubelet[3316]: W0707 00:00:22.372009 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:22.372129 kubelet[3316]: E0707 00:00:22.372025 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:22.372339 kubelet[3316]: E0707 00:00:22.372324 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:22.372339 kubelet[3316]: W0707 00:00:22.372338 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:22.372401 kubelet[3316]: E0707 00:00:22.372353 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:22.372613 kubelet[3316]: E0707 00:00:22.372593 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:22.372613 kubelet[3316]: W0707 00:00:22.372603 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:22.372694 kubelet[3316]: E0707 00:00:22.372619 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:22.373006 kubelet[3316]: E0707 00:00:22.372861 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:22.373006 kubelet[3316]: W0707 00:00:22.372876 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:22.373006 kubelet[3316]: E0707 00:00:22.372893 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:22.373125 kubelet[3316]: E0707 00:00:22.373110 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:22.373338 kubelet[3316]: W0707 00:00:22.373120 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:22.373383 kubelet[3316]: E0707 00:00:22.373351 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:00:22.373630 kubelet[3316]: E0707 00:00:22.373614 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:22.373630 kubelet[3316]: W0707 00:00:22.373627 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:22.373954 kubelet[3316]: E0707 00:00:22.373761 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:22.373954 kubelet[3316]: E0707 00:00:22.373894 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:22.373954 kubelet[3316]: W0707 00:00:22.373901 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:22.373954 kubelet[3316]: E0707 00:00:22.373921 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:22.374082 kubelet[3316]: E0707 00:00:22.374045 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:22.374082 kubelet[3316]: W0707 00:00:22.374052 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:22.374082 kubelet[3316]: E0707 00:00:22.374065 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:22.374269 kubelet[3316]: E0707 00:00:22.374252 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:22.374269 kubelet[3316]: W0707 00:00:22.374264 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:22.374377 kubelet[3316]: E0707 00:00:22.374285 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:22.374505 kubelet[3316]: E0707 00:00:22.374487 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:22.374505 kubelet[3316]: W0707 00:00:22.374500 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:22.374600 kubelet[3316]: E0707 00:00:22.374520 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:00:22.374973 kubelet[3316]: E0707 00:00:22.374751 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:22.374973 kubelet[3316]: W0707 00:00:22.374763 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:22.374973 kubelet[3316]: E0707 00:00:22.374780 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:22.375073 kubelet[3316]: E0707 00:00:22.375060 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:22.375073 kubelet[3316]: W0707 00:00:22.375071 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:22.375122 kubelet[3316]: E0707 00:00:22.375095 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:22.375323 kubelet[3316]: E0707 00:00:22.375309 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:22.375323 kubelet[3316]: W0707 00:00:22.375321 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:22.375386 kubelet[3316]: E0707 00:00:22.375329 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:00:23.039619 kubelet[3316]: E0707 00:00:23.039448 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vmlwg" podUID="fd3bd012-86e5-4807-95d5-ad6901284597" Jul 7 00:00:25.039252 kubelet[3316]: E0707 00:00:25.038652 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vmlwg" podUID="fd3bd012-86e5-4807-95d5-ad6901284597" Jul 7 00:00:26.262549 containerd[2083]: time="2025-07-07T00:00:26.262478505Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:26.264319 containerd[2083]: time="2025-07-07T00:00:26.264233111Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 7 00:00:26.268138 containerd[2083]: time="2025-07-07T00:00:26.267316169Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:26.270237 containerd[2083]: time="2025-07-07T00:00:26.270169844Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:26.271008 containerd[2083]: time="2025-07-07T00:00:26.270870404Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 5.876586351s" Jul 7 00:00:26.271008 containerd[2083]: time="2025-07-07T00:00:26.270909688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 7 00:00:26.273931 containerd[2083]: time="2025-07-07T00:00:26.273883963Z" level=info msg="CreateContainer within sandbox \"67a583cf8a3946c6bd5a85b98d3df2c6b071ee28d57f0b9033669acf9cf9ac1c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 7 00:00:26.298108 containerd[2083]: time="2025-07-07T00:00:26.298039526Z" level=info msg="CreateContainer within sandbox \"67a583cf8a3946c6bd5a85b98d3df2c6b071ee28d57f0b9033669acf9cf9ac1c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"42fe3a6566b9a43055aad51164e870c48b05359a40b54b973ae7f4ed66c95ff4\"" Jul 7 00:00:26.299094 containerd[2083]: time="2025-07-07T00:00:26.298929319Z" level=info msg="StartContainer for \"42fe3a6566b9a43055aad51164e870c48b05359a40b54b973ae7f4ed66c95ff4\"" Jul 7 00:00:26.411254 containerd[2083]: time="2025-07-07T00:00:26.411015233Z" level=info msg="StartContainer for \"42fe3a6566b9a43055aad51164e870c48b05359a40b54b973ae7f4ed66c95ff4\" returns successfully" Jul 7 00:00:26.470054 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-42fe3a6566b9a43055aad51164e870c48b05359a40b54b973ae7f4ed66c95ff4-rootfs.mount: Deactivated successfully. Jul 7 00:00:26.522254 containerd[2083]: time="2025-07-07T00:00:26.507759084Z" level=info msg="shim disconnected" id=42fe3a6566b9a43055aad51164e870c48b05359a40b54b973ae7f4ed66c95ff4 namespace=k8s.io Jul 7 00:00:26.522254 containerd[2083]: time="2025-07-07T00:00:26.522163667Z" level=warning msg="cleaning up after shim disconnected" id=42fe3a6566b9a43055aad51164e870c48b05359a40b54b973ae7f4ed66c95ff4 namespace=k8s.io Jul 7 00:00:26.522254 containerd[2083]: time="2025-07-07T00:00:26.522184608Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:00:27.040766 kubelet[3316]: E0707 00:00:27.039525 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vmlwg" podUID="fd3bd012-86e5-4807-95d5-ad6901284597" Jul 7 00:00:27.389553 containerd[2083]: time="2025-07-07T00:00:27.388603497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 7 00:00:27.415586 kubelet[3316]: I0707 00:00:27.412388 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8567cd9d8b-9hf8b" podStartSLOduration=8.005293365 podStartE2EDuration="15.412365441s" podCreationTimestamp="2025-07-07 00:00:12 +0000 UTC" firstStartedPulling="2025-07-07 00:00:12.986605598 +0000 UTC m=+26.136031435" lastFinishedPulling="2025-07-07 00:00:20.39367767 +0000 UTC m=+33.543103511" observedRunningTime="2025-07-07 00:00:21.350486393 +0000 UTC m=+34.499912247" watchObservedRunningTime="2025-07-07 00:00:27.412365441 +0000 UTC m=+40.561791294" Jul 7 00:00:29.039005 kubelet[3316]: E0707 00:00:29.038956 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vmlwg" podUID="fd3bd012-86e5-4807-95d5-ad6901284597" Jul 7 00:00:31.040788 kubelet[3316]: E0707 00:00:31.040353 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vmlwg" podUID="fd3bd012-86e5-4807-95d5-ad6901284597" Jul 7 00:00:32.382988 containerd[2083]: time="2025-07-07T00:00:32.382934283Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:32.385341 containerd[2083]: time="2025-07-07T00:00:32.384909046Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 7 00:00:32.387909 containerd[2083]: time="2025-07-07T00:00:32.387841087Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:32.393838 containerd[2083]: time="2025-07-07T00:00:32.393771592Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
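
The pod_startup_latency_tracker record above carries its own arithmetic, and the figures are mutually consistent: the SLO duration is the end-to-end startup time minus the image-pull window, with the pull window taken from the monotonic (m=+) offsets:

    podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp
                        = 00:00:27.412365441 - 00:00:12       = 15.412365441 s
    pull window         = lastFinishedPulling - firstStartedPulling
                        = m=+33.543103511 - m=+26.136031435   =  7.407072076 s
    podStartSLOduration = 15.412365441 - 7.407072076          =  8.005293365 s

which matches the logged podStartSLOduration=8.005293365 exactly.
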
Jul 7 00:00:32.395597 containerd[2083]: time="2025-07-07T00:00:32.394817412Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 5.005260466s" Jul 7 00:00:32.395597 containerd[2083]: time="2025-07-07T00:00:32.394851094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 7 00:00:32.397379 containerd[2083]: time="2025-07-07T00:00:32.397343066Z" level=info msg="CreateContainer within sandbox \"67a583cf8a3946c6bd5a85b98d3df2c6b071ee28d57f0b9033669acf9cf9ac1c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 7 00:00:32.431302 containerd[2083]: time="2025-07-07T00:00:32.431139194Z" level=info msg="CreateContainer within sandbox \"67a583cf8a3946c6bd5a85b98d3df2c6b071ee28d57f0b9033669acf9cf9ac1c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"238fb93cd34ff3c5bb6fc1c53055a5987f16226df5871f03dd30a3ed25ffe078\"" Jul 7 00:00:32.433707 containerd[2083]: time="2025-07-07T00:00:32.432143105Z" level=info msg="StartContainer for \"238fb93cd34ff3c5bb6fc1c53055a5987f16226df5871f03dd30a3ed25ffe078\"" Jul 7 00:00:32.522714 containerd[2083]: time="2025-07-07T00:00:32.522633443Z" level=info msg="StartContainer for \"238fb93cd34ff3c5bb6fc1c53055a5987f16226df5871f03dd30a3ed25ffe078\" returns successfully" Jul 7 00:00:33.040125 kubelet[3316]: E0707 00:00:33.039445 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vmlwg" podUID="fd3bd012-86e5-4807-95d5-ad6901284597" Jul 7 00:00:33.638036 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-238fb93cd34ff3c5bb6fc1c53055a5987f16226df5871f03dd30a3ed25ffe078-rootfs.mount: Deactivated successfully. 
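
Putting the two pulls side by side gives a rough effective fetch rate (bytes read divided by the quoted wall time; "bytes read" is what containerd actually fetched, while the quoted "size" counts stored image content and need not equal bytes fetched, so treat these as order-of-magnitude figures):

    pod2daemon-flexvol:v3.30.2   4,446,956 B / 5.876586351 s ≈  0.76 MB/s
    cni:v3.30.2                 70,436,221 B / 5.005260466 s ≈ 14.1  MB/s

Why two back-to-back pulls from the same registry differ by roughly 18x is not visible in the log itself.
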
Jul 7 00:00:33.645441 kubelet[3316]: I0707 00:00:33.645358 3316 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 7 00:00:33.681723 containerd[2083]: time="2025-07-07T00:00:33.681624080Z" level=info msg="shim disconnected" id=238fb93cd34ff3c5bb6fc1c53055a5987f16226df5871f03dd30a3ed25ffe078 namespace=k8s.io Jul 7 00:00:33.681723 containerd[2083]: time="2025-07-07T00:00:33.681717885Z" level=warning msg="cleaning up after shim disconnected" id=238fb93cd34ff3c5bb6fc1c53055a5987f16226df5871f03dd30a3ed25ffe078 namespace=k8s.io Jul 7 00:00:33.681723 containerd[2083]: time="2025-07-07T00:00:33.681731676Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:00:33.723508 containerd[2083]: time="2025-07-07T00:00:33.722922367Z" level=warning msg="cleanup warnings time=\"2025-07-07T00:00:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 7 00:00:33.762828 kubelet[3316]: I0707 00:00:33.762166 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4l2t\" (UniqueName: \"kubernetes.io/projected/4fa34b58-0d0d-481f-8d32-2a1b40537372-kube-api-access-x4l2t\") pod \"whisker-7446b747d4-twlf4\" (UID: \"4fa34b58-0d0d-481f-8d32-2a1b40537372\") " pod="calico-system/whisker-7446b747d4-twlf4" Jul 7 00:00:33.762828 kubelet[3316]: I0707 00:00:33.762210 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3256e36e-3dfd-4340-92ad-002ae5ad9541-tigera-ca-bundle\") pod \"calico-kube-controllers-d699df5cb-rvx8c\" (UID: \"3256e36e-3dfd-4340-92ad-002ae5ad9541\") " pod="calico-system/calico-kube-controllers-d699df5cb-rvx8c" Jul 7 00:00:33.762828 kubelet[3316]: I0707 00:00:33.762234 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5gkk\" (UniqueName: \"kubernetes.io/projected/3bf78fb3-72f6-471c-b914-66a504f5315e-kube-api-access-c5gkk\") pod \"coredns-7c65d6cfc9-p6qwp\" (UID: \"3bf78fb3-72f6-471c-b914-66a504f5315e\") " pod="kube-system/coredns-7c65d6cfc9-p6qwp" Jul 7 00:00:33.762828 kubelet[3316]: I0707 00:00:33.762256 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4fa34b58-0d0d-481f-8d32-2a1b40537372-whisker-backend-key-pair\") pod \"whisker-7446b747d4-twlf4\" (UID: \"4fa34b58-0d0d-481f-8d32-2a1b40537372\") " pod="calico-system/whisker-7446b747d4-twlf4" Jul 7 00:00:33.762828 kubelet[3316]: I0707 00:00:33.762276 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fa34b58-0d0d-481f-8d32-2a1b40537372-whisker-ca-bundle\") pod \"whisker-7446b747d4-twlf4\" (UID: \"4fa34b58-0d0d-481f-8d32-2a1b40537372\") " pod="calico-system/whisker-7446b747d4-twlf4" Jul 7 00:00:33.763113 kubelet[3316]: I0707 00:00:33.762297 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/833c9c7e-23d5-495b-bc31-3bfc82fc6450-calico-apiserver-certs\") pod \"calico-apiserver-674b869996-5z2gh\" (UID: \"833c9c7e-23d5-495b-bc31-3bfc82fc6450\") " pod="calico-apiserver/calico-apiserver-674b869996-5z2gh" Jul 7 00:00:33.763113 
kubelet[3316]: I0707 00:00:33.762322 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxrk5\" (UniqueName: \"kubernetes.io/projected/c34bf4f5-bb6b-420d-9d8c-1e1dc634bceb-kube-api-access-fxrk5\") pod \"calico-apiserver-674b869996-75pq4\" (UID: \"c34bf4f5-bb6b-420d-9d8c-1e1dc634bceb\") " pod="calico-apiserver/calico-apiserver-674b869996-75pq4" Jul 7 00:00:33.763113 kubelet[3316]: I0707 00:00:33.762340 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bf78fb3-72f6-471c-b914-66a504f5315e-config-volume\") pod \"coredns-7c65d6cfc9-p6qwp\" (UID: \"3bf78fb3-72f6-471c-b914-66a504f5315e\") " pod="kube-system/coredns-7c65d6cfc9-p6qwp" Jul 7 00:00:33.763113 kubelet[3316]: I0707 00:00:33.762360 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfx26\" (UniqueName: \"kubernetes.io/projected/2e6c58c9-1e4d-4fb6-9bf1-ad7b4521fb7e-kube-api-access-vfx26\") pod \"coredns-7c65d6cfc9-xlnl6\" (UID: \"2e6c58c9-1e4d-4fb6-9bf1-ad7b4521fb7e\") " pod="kube-system/coredns-7c65d6cfc9-xlnl6" Jul 7 00:00:33.763113 kubelet[3316]: I0707 00:00:33.762382 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/561fc67c-cd50-4c5b-b964-b8cb6f5c6bbe-config\") pod \"goldmane-58fd7646b9-xq9q9\" (UID: \"561fc67c-cd50-4c5b-b964-b8cb6f5c6bbe\") " pod="calico-system/goldmane-58fd7646b9-xq9q9" Jul 7 00:00:33.763247 kubelet[3316]: I0707 00:00:33.762401 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbxkj\" (UniqueName: \"kubernetes.io/projected/561fc67c-cd50-4c5b-b964-b8cb6f5c6bbe-kube-api-access-jbxkj\") pod \"goldmane-58fd7646b9-xq9q9\" (UID: \"561fc67c-cd50-4c5b-b964-b8cb6f5c6bbe\") " pod="calico-system/goldmane-58fd7646b9-xq9q9" Jul 7 00:00:33.763247 kubelet[3316]: I0707 00:00:33.762424 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c34bf4f5-bb6b-420d-9d8c-1e1dc634bceb-calico-apiserver-certs\") pod \"calico-apiserver-674b869996-75pq4\" (UID: \"c34bf4f5-bb6b-420d-9d8c-1e1dc634bceb\") " pod="calico-apiserver/calico-apiserver-674b869996-75pq4" Jul 7 00:00:33.763247 kubelet[3316]: I0707 00:00:33.762440 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/561fc67c-cd50-4c5b-b964-b8cb6f5c6bbe-goldmane-key-pair\") pod \"goldmane-58fd7646b9-xq9q9\" (UID: \"561fc67c-cd50-4c5b-b964-b8cb6f5c6bbe\") " pod="calico-system/goldmane-58fd7646b9-xq9q9" Jul 7 00:00:33.763247 kubelet[3316]: I0707 00:00:33.762460 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dd4k\" (UniqueName: \"kubernetes.io/projected/3256e36e-3dfd-4340-92ad-002ae5ad9541-kube-api-access-6dd4k\") pod \"calico-kube-controllers-d699df5cb-rvx8c\" (UID: \"3256e36e-3dfd-4340-92ad-002ae5ad9541\") " pod="calico-system/calico-kube-controllers-d699df5cb-rvx8c" Jul 7 00:00:33.763247 kubelet[3316]: I0707 00:00:33.762481 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47jfl\" (UniqueName: 
\"kubernetes.io/projected/833c9c7e-23d5-495b-bc31-3bfc82fc6450-kube-api-access-47jfl\") pod \"calico-apiserver-674b869996-5z2gh\" (UID: \"833c9c7e-23d5-495b-bc31-3bfc82fc6450\") " pod="calico-apiserver/calico-apiserver-674b869996-5z2gh" Jul 7 00:00:33.763375 kubelet[3316]: I0707 00:00:33.762502 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/561fc67c-cd50-4c5b-b964-b8cb6f5c6bbe-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-xq9q9\" (UID: \"561fc67c-cd50-4c5b-b964-b8cb6f5c6bbe\") " pod="calico-system/goldmane-58fd7646b9-xq9q9" Jul 7 00:00:33.763375 kubelet[3316]: I0707 00:00:33.762524 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e6c58c9-1e4d-4fb6-9bf1-ad7b4521fb7e-config-volume\") pod \"coredns-7c65d6cfc9-xlnl6\" (UID: \"2e6c58c9-1e4d-4fb6-9bf1-ad7b4521fb7e\") " pod="kube-system/coredns-7c65d6cfc9-xlnl6" Jul 7 00:00:34.028794 containerd[2083]: time="2025-07-07T00:00:34.028677569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xlnl6,Uid:2e6c58c9-1e4d-4fb6-9bf1-ad7b4521fb7e,Namespace:kube-system,Attempt:0,}" Jul 7 00:00:34.037599 containerd[2083]: time="2025-07-07T00:00:34.037544388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-p6qwp,Uid:3bf78fb3-72f6-471c-b914-66a504f5315e,Namespace:kube-system,Attempt:0,}" Jul 7 00:00:34.039549 containerd[2083]: time="2025-07-07T00:00:34.039376350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-674b869996-75pq4,Uid:c34bf4f5-bb6b-420d-9d8c-1e1dc634bceb,Namespace:calico-apiserver,Attempt:0,}" Jul 7 00:00:34.043906 containerd[2083]: time="2025-07-07T00:00:34.043634466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7446b747d4-twlf4,Uid:4fa34b58-0d0d-481f-8d32-2a1b40537372,Namespace:calico-system,Attempt:0,}" Jul 7 00:00:34.043906 containerd[2083]: time="2025-07-07T00:00:34.043739622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d699df5cb-rvx8c,Uid:3256e36e-3dfd-4340-92ad-002ae5ad9541,Namespace:calico-system,Attempt:0,}" Jul 7 00:00:34.043906 containerd[2083]: time="2025-07-07T00:00:34.043642676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-674b869996-5z2gh,Uid:833c9c7e-23d5-495b-bc31-3bfc82fc6450,Namespace:calico-apiserver,Attempt:0,}" Jul 7 00:00:34.060066 containerd[2083]: time="2025-07-07T00:00:34.060017848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-xq9q9,Uid:561fc67c-cd50-4c5b-b964-b8cb6f5c6bbe,Namespace:calico-system,Attempt:0,}" Jul 7 00:00:34.478996 containerd[2083]: time="2025-07-07T00:00:34.478104351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 7 00:00:34.619697 containerd[2083]: time="2025-07-07T00:00:34.619562833Z" level=error msg="Failed to destroy network for sandbox \"1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:34.621546 containerd[2083]: time="2025-07-07T00:00:34.621393183Z" level=error msg="Failed to destroy network for sandbox \"f27fa6a581cdff01c995afe28c668d659743aa45f05ef4ec1c2f95e97812ec6b\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:34.639205 containerd[2083]: time="2025-07-07T00:00:34.632562270Z" level=error msg="encountered an error cleaning up failed sandbox \"1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:34.641710 containerd[2083]: time="2025-07-07T00:00:34.632569593Z" level=error msg="encountered an error cleaning up failed sandbox \"f27fa6a581cdff01c995afe28c668d659743aa45f05ef4ec1c2f95e97812ec6b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:34.675157 containerd[2083]: time="2025-07-07T00:00:34.675106771Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-xq9q9,Uid:561fc67c-cd50-4c5b-b964-b8cb6f5c6bbe,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:34.682775 containerd[2083]: time="2025-07-07T00:00:34.632788105Z" level=error msg="Failed to destroy network for sandbox \"7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:34.682775 containerd[2083]: time="2025-07-07T00:00:34.679997367Z" level=error msg="encountered an error cleaning up failed sandbox \"7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:34.682775 containerd[2083]: time="2025-07-07T00:00:34.680060666Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7446b747d4-twlf4,Uid:4fa34b58-0d0d-481f-8d32-2a1b40537372,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:34.683872 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729-shm.mount: Deactivated successfully. 
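
Every sandbox failure in this burst is one root cause fanning out: on both add and delete, the Calico CNI plugin needs /var/lib/calico/nodename, a file that the calico/node container writes once it is running, and at 00:00:34 that container's image is still being pulled (the PullImage "ghcr.io/flatcar/calico/node:v3.30.2" just above). A simplified sketch of the gate the error text points at, using only the path and hint from the log (the real plugin has additional fallbacks, e.g. a nodename supplied via its CNI configuration, so this is illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // Written by the calico/node container at startup; absent until then.
    const nodenameFile = "/var/lib/calico/nodename"

    // readNodename mirrors the check behind "stat /var/lib/calico/nodename:
    // no such file or directory": until calico/node has started, the CNI
    // plugin cannot resolve which Calico node it is acting for, so both
    // ADD and DEL fail fast with the hint seen in the log.
    func readNodename() (string, error) {
    	b, err := os.ReadFile(nodenameFile)
    	if err != nil {
    		return "", fmt.Errorf("stat %s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile, err)
    	}
    	return strings.TrimSpace(string(b)), nil
    }

    func main() {
    	name, err := readNodename()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("acting as Calico node:", name)
    }
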
Jul 7 00:00:34.697113 kubelet[3316]: E0707 00:00:34.692456 3316 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:34.697113 kubelet[3316]: E0707 00:00:34.692545 3316 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7446b747d4-twlf4" Jul 7 00:00:34.697113 kubelet[3316]: E0707 00:00:34.692577 3316 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7446b747d4-twlf4" Jul 7 00:00:34.699449 containerd[2083]: time="2025-07-07T00:00:34.691632339Z" level=error msg="Failed to destroy network for sandbox \"2946e45277cf90d220237b5f463466853bc1a4c61bb560843d98b55977134283\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:34.699449 containerd[2083]: time="2025-07-07T00:00:34.692112670Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-674b869996-75pq4,Uid:c34bf4f5-bb6b-420d-9d8c-1e1dc634bceb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f27fa6a581cdff01c995afe28c668d659743aa45f05ef4ec1c2f95e97812ec6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:34.699449 containerd[2083]: time="2025-07-07T00:00:34.696029669Z" level=error msg="encountered an error cleaning up failed sandbox \"2946e45277cf90d220237b5f463466853bc1a4c61bb560843d98b55977134283\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:34.699449 containerd[2083]: time="2025-07-07T00:00:34.696116294Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d699df5cb-rvx8c,Uid:3256e36e-3dfd-4340-92ad-002ae5ad9541,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2946e45277cf90d220237b5f463466853bc1a4c61bb560843d98b55977134283\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:34.699449 containerd[2083]: time="2025-07-07T00:00:34.632834535Z" level=error msg="Failed to destroy network for sandbox 
\"8858dd71f9256dbbe403acf740a9b2a6172f39804d7817804767375fb5ed5559\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:34.699449 containerd[2083]: time="2025-07-07T00:00:34.696511820Z" level=error msg="encountered an error cleaning up failed sandbox \"8858dd71f9256dbbe403acf740a9b2a6172f39804d7817804767375fb5ed5559\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:34.699449 containerd[2083]: time="2025-07-07T00:00:34.696562216Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-p6qwp,Uid:3bf78fb3-72f6-471c-b914-66a504f5315e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8858dd71f9256dbbe403acf740a9b2a6172f39804d7817804767375fb5ed5559\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:34.699449 containerd[2083]: time="2025-07-07T00:00:34.691917623Z" level=error msg="Failed to destroy network for sandbox \"fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:34.699449 containerd[2083]: time="2025-07-07T00:00:34.699101588Z" level=error msg="encountered an error cleaning up failed sandbox \"fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:34.699449 containerd[2083]: time="2025-07-07T00:00:34.699175573Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-674b869996-5z2gh,Uid:833c9c7e-23d5-495b-bc31-3bfc82fc6450,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:34.698232 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2946e45277cf90d220237b5f463466853bc1a4c61bb560843d98b55977134283-shm.mount: Deactivated successfully. 
Jul 7 00:00:34.709824 kubelet[3316]: E0707 00:00:34.692637 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7446b747d4-twlf4_calico-system(4fa34b58-0d0d-481f-8d32-2a1b40537372)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7446b747d4-twlf4_calico-system(4fa34b58-0d0d-481f-8d32-2a1b40537372)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7446b747d4-twlf4" podUID="4fa34b58-0d0d-481f-8d32-2a1b40537372" Jul 7 00:00:34.709824 kubelet[3316]: E0707 00:00:34.692863 3316 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:34.709824 kubelet[3316]: E0707 00:00:34.692922 3316 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-xq9q9" Jul 7 00:00:34.704311 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e-shm.mount: Deactivated successfully. 
Jul 7 00:00:34.710164 kubelet[3316]: E0707 00:00:34.692948 3316 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-xq9q9" Jul 7 00:00:34.710164 kubelet[3316]: E0707 00:00:34.692991 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-xq9q9_calico-system(561fc67c-cd50-4c5b-b964-b8cb6f5c6bbe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-xq9q9_calico-system(561fc67c-cd50-4c5b-b964-b8cb6f5c6bbe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-xq9q9" podUID="561fc67c-cd50-4c5b-b964-b8cb6f5c6bbe" Jul 7 00:00:34.710164 kubelet[3316]: E0707 00:00:34.705358 3316 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:34.704523 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8858dd71f9256dbbe403acf740a9b2a6172f39804d7817804767375fb5ed5559-shm.mount: Deactivated successfully. 
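
Each failed sandbox surfaces four times in kubelet's log, once per layer: log.go:32 (the raw CRI client error), kuberuntime_sandbox.go:72, kuberuntime_manager.go:1170, and finally pod_workers.go:1301, which re-quotes the accumulated message; each re-quoting doubles the backslash escaping, which is why the sandbox IDs end up wrapped in \\\" by the last layer. A toy illustration of that quoting effect (not kubelet's actual code path):

    package main

    import "fmt"

    func main() {
    	// Innermost error as the CRI client reports it (one level of quotes).
    	cri := `rpc error: code = Unknown desc = failed to setup network for sandbox "7fe907f0..."`
    	// A layer that formats the previous error with %q re-escapes it...
    	wrapped := fmt.Sprintf("failed to \"CreatePodSandbox\" with error: %q", cri)
    	// ...so quoting the result once more, as the final logging layer
    	// effectively does, yields the tripled backslashes seen above.
    	fmt.Printf("%q\n", wrapped)
    }
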
Jul 7 00:00:34.710454 kubelet[3316]: E0707 00:00:34.705428 3316 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-674b869996-5z2gh" Jul 7 00:00:34.710454 kubelet[3316]: E0707 00:00:34.705462 3316 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-674b869996-5z2gh" Jul 7 00:00:34.710454 kubelet[3316]: E0707 00:00:34.705517 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-674b869996-5z2gh_calico-apiserver(833c9c7e-23d5-495b-bc31-3bfc82fc6450)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-674b869996-5z2gh_calico-apiserver(833c9c7e-23d5-495b-bc31-3bfc82fc6450)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-674b869996-5z2gh" podUID="833c9c7e-23d5-495b-bc31-3bfc82fc6450" Jul 7 00:00:34.710614 kubelet[3316]: E0707 00:00:34.705578 3316 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f27fa6a581cdff01c995afe28c668d659743aa45f05ef4ec1c2f95e97812ec6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:34.710614 kubelet[3316]: E0707 00:00:34.705607 3316 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f27fa6a581cdff01c995afe28c668d659743aa45f05ef4ec1c2f95e97812ec6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-674b869996-75pq4" Jul 7 00:00:34.710614 kubelet[3316]: E0707 00:00:34.705627 3316 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f27fa6a581cdff01c995afe28c668d659743aa45f05ef4ec1c2f95e97812ec6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-674b869996-75pq4" Jul 7 00:00:34.710614 kubelet[3316]: E0707 00:00:34.709210 3316 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2946e45277cf90d220237b5f463466853bc1a4c61bb560843d98b55977134283\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:34.710829 kubelet[3316]: E0707 00:00:34.709270 3316 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2946e45277cf90d220237b5f463466853bc1a4c61bb560843d98b55977134283\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d699df5cb-rvx8c" Jul 7 00:00:34.710829 kubelet[3316]: E0707 00:00:34.709300 3316 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2946e45277cf90d220237b5f463466853bc1a4c61bb560843d98b55977134283\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d699df5cb-rvx8c" Jul 7 00:00:34.710829 kubelet[3316]: E0707 00:00:34.709358 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-d699df5cb-rvx8c_calico-system(3256e36e-3dfd-4340-92ad-002ae5ad9541)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-d699df5cb-rvx8c_calico-system(3256e36e-3dfd-4340-92ad-002ae5ad9541)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2946e45277cf90d220237b5f463466853bc1a4c61bb560843d98b55977134283\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d699df5cb-rvx8c" podUID="3256e36e-3dfd-4340-92ad-002ae5ad9541" Jul 7 00:00:34.711006 kubelet[3316]: E0707 00:00:34.709419 3316 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8858dd71f9256dbbe403acf740a9b2a6172f39804d7817804767375fb5ed5559\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:34.711006 kubelet[3316]: E0707 00:00:34.709444 3316 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8858dd71f9256dbbe403acf740a9b2a6172f39804d7817804767375fb5ed5559\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-p6qwp" Jul 7 00:00:34.711006 kubelet[3316]: E0707 00:00:34.709464 3316 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8858dd71f9256dbbe403acf740a9b2a6172f39804d7817804767375fb5ed5559\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-p6qwp" Jul 7 00:00:34.711159 
kubelet[3316]: E0707 00:00:34.709497 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-p6qwp_kube-system(3bf78fb3-72f6-471c-b914-66a504f5315e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-p6qwp_kube-system(3bf78fb3-72f6-471c-b914-66a504f5315e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8858dd71f9256dbbe403acf740a9b2a6172f39804d7817804767375fb5ed5559\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-p6qwp" podUID="3bf78fb3-72f6-471c-b914-66a504f5315e" Jul 7 00:00:34.711159 kubelet[3316]: E0707 00:00:34.709716 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-674b869996-75pq4_calico-apiserver(c34bf4f5-bb6b-420d-9d8c-1e1dc634bceb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-674b869996-75pq4_calico-apiserver(c34bf4f5-bb6b-420d-9d8c-1e1dc634bceb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f27fa6a581cdff01c995afe28c668d659743aa45f05ef4ec1c2f95e97812ec6b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-674b869996-75pq4" podUID="c34bf4f5-bb6b-420d-9d8c-1e1dc634bceb" Jul 7 00:00:34.713900 containerd[2083]: time="2025-07-07T00:00:34.713722633Z" level=error msg="Failed to destroy network for sandbox \"8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:34.716939 containerd[2083]: time="2025-07-07T00:00:34.716879539Z" level=error msg="encountered an error cleaning up failed sandbox \"8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:34.717707 containerd[2083]: time="2025-07-07T00:00:34.717106662Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xlnl6,Uid:2e6c58c9-1e4d-4fb6-9bf1-ad7b4521fb7e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:34.718231 kubelet[3316]: E0707 00:00:34.717415 3316 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:34.718231 kubelet[3316]: E0707 00:00:34.717484 3316 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-xlnl6" Jul 7 00:00:34.718231 kubelet[3316]: E0707 00:00:34.717510 3316 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-xlnl6" Jul 7 00:00:34.718388 kubelet[3316]: E0707 00:00:34.717574 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-xlnl6_kube-system(2e6c58c9-1e4d-4fb6-9bf1-ad7b4521fb7e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-xlnl6_kube-system(2e6c58c9-1e4d-4fb6-9bf1-ad7b4521fb7e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-xlnl6" podUID="2e6c58c9-1e4d-4fb6-9bf1-ad7b4521fb7e" Jul 7 00:00:34.719910 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b-shm.mount: Deactivated successfully. 
Jul 7 00:00:35.043651 containerd[2083]: time="2025-07-07T00:00:35.043079840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vmlwg,Uid:fd3bd012-86e5-4807-95d5-ad6901284597,Namespace:calico-system,Attempt:0,}" Jul 7 00:00:35.116316 containerd[2083]: time="2025-07-07T00:00:35.116246403Z" level=error msg="Failed to destroy network for sandbox \"80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:35.116676 containerd[2083]: time="2025-07-07T00:00:35.116623894Z" level=error msg="encountered an error cleaning up failed sandbox \"80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:35.116818 containerd[2083]: time="2025-07-07T00:00:35.116710498Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vmlwg,Uid:fd3bd012-86e5-4807-95d5-ad6901284597,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:35.117039 kubelet[3316]: E0707 00:00:35.116995 3316 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:35.117307 kubelet[3316]: E0707 00:00:35.117064 3316 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vmlwg" Jul 7 00:00:35.117307 kubelet[3316]: E0707 00:00:35.117092 3316 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vmlwg" Jul 7 00:00:35.117307 kubelet[3316]: E0707 00:00:35.117248 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vmlwg_calico-system(fd3bd012-86e5-4807-95d5-ad6901284597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vmlwg_calico-system(fd3bd012-86e5-4807-95d5-ad6901284597)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vmlwg" podUID="fd3bd012-86e5-4807-95d5-ad6901284597" Jul 7 00:00:35.450468 kubelet[3316]: I0707 00:00:35.450434 3316 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56" Jul 7 00:00:35.452225 kubelet[3316]: I0707 00:00:35.452195 3316 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e" Jul 7 00:00:35.486015 kubelet[3316]: I0707 00:00:35.485977 3316 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92" Jul 7 00:00:35.487953 kubelet[3316]: I0707 00:00:35.487706 3316 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2946e45277cf90d220237b5f463466853bc1a4c61bb560843d98b55977134283" Jul 7 00:00:35.490249 kubelet[3316]: I0707 00:00:35.489894 3316 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729" Jul 7 00:00:35.493321 kubelet[3316]: I0707 00:00:35.492362 3316 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f27fa6a581cdff01c995afe28c668d659743aa45f05ef4ec1c2f95e97812ec6b" Jul 7 00:00:35.496213 kubelet[3316]: I0707 00:00:35.496184 3316 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8858dd71f9256dbbe403acf740a9b2a6172f39804d7817804767375fb5ed5559" Jul 7 00:00:35.506508 containerd[2083]: time="2025-07-07T00:00:35.505075796Z" level=info msg="StopPodSandbox for \"2946e45277cf90d220237b5f463466853bc1a4c61bb560843d98b55977134283\"" Jul 7 00:00:35.506776 containerd[2083]: time="2025-07-07T00:00:35.506733830Z" level=info msg="StopPodSandbox for \"fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e\"" Jul 7 00:00:35.507703 containerd[2083]: time="2025-07-07T00:00:35.507642178Z" level=info msg="Ensure that sandbox fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e in task-service has been cleanup successfully" Jul 7 00:00:35.508117 containerd[2083]: time="2025-07-07T00:00:35.508084076Z" level=info msg="Ensure that sandbox 2946e45277cf90d220237b5f463466853bc1a4c61bb560843d98b55977134283 in task-service has been cleanup successfully" Jul 7 00:00:35.510143 containerd[2083]: time="2025-07-07T00:00:35.510098945Z" level=info msg="StopPodSandbox for \"80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56\"" Jul 7 00:00:35.510384 containerd[2083]: time="2025-07-07T00:00:35.510337895Z" level=info msg="StopPodSandbox for \"7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729\"" Jul 7 00:00:35.510723 containerd[2083]: time="2025-07-07T00:00:35.510698097Z" level=info msg="Ensure that sandbox 80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56 in task-service has been cleanup successfully" Jul 7 00:00:35.516002 containerd[2083]: time="2025-07-07T00:00:35.510963580Z" level=info msg="StopPodSandbox for \"1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92\"" Jul 7 00:00:35.516683 containerd[2083]: time="2025-07-07T00:00:35.512785171Z" level=info msg="StopPodSandbox for \"8858dd71f9256dbbe403acf740a9b2a6172f39804d7817804767375fb5ed5559\"" Jul 7 
00:00:35.516683 containerd[2083]: time="2025-07-07T00:00:35.516444939Z" level=info msg="Ensure that sandbox 8858dd71f9256dbbe403acf740a9b2a6172f39804d7817804767375fb5ed5559 in task-service has been cleanup successfully" Jul 7 00:00:35.519757 containerd[2083]: time="2025-07-07T00:00:35.519718690Z" level=info msg="Ensure that sandbox 1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92 in task-service has been cleanup successfully" Jul 7 00:00:35.521986 kubelet[3316]: I0707 00:00:35.521947 3316 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b" Jul 7 00:00:35.526213 containerd[2083]: time="2025-07-07T00:00:35.524113769Z" level=info msg="StopPodSandbox for \"8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b\"" Jul 7 00:00:35.538740 containerd[2083]: time="2025-07-07T00:00:35.537977054Z" level=info msg="Ensure that sandbox 8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b in task-service has been cleanup successfully" Jul 7 00:00:35.553062 containerd[2083]: time="2025-07-07T00:00:35.511694250Z" level=info msg="Ensure that sandbox 7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729 in task-service has been cleanup successfully" Jul 7 00:00:35.553963 containerd[2083]: time="2025-07-07T00:00:35.511854665Z" level=info msg="StopPodSandbox for \"f27fa6a581cdff01c995afe28c668d659743aa45f05ef4ec1c2f95e97812ec6b\"" Jul 7 00:00:35.559224 containerd[2083]: time="2025-07-07T00:00:35.559044040Z" level=info msg="Ensure that sandbox f27fa6a581cdff01c995afe28c668d659743aa45f05ef4ec1c2f95e97812ec6b in task-service has been cleanup successfully" Jul 7 00:00:35.620040 containerd[2083]: time="2025-07-07T00:00:35.619975316Z" level=error msg="StopPodSandbox for \"fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e\" failed" error="failed to destroy network for sandbox \"fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:35.620980 kubelet[3316]: E0707 00:00:35.620835 3316 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e" Jul 7 00:00:35.635484 kubelet[3316]: E0707 00:00:35.620911 3316 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e"} Jul 7 00:00:35.635484 kubelet[3316]: E0707 00:00:35.633415 3316 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"833c9c7e-23d5-495b-bc31-3bfc82fc6450\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:00:35.635484 kubelet[3316]: E0707 
00:00:35.633452 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"833c9c7e-23d5-495b-bc31-3bfc82fc6450\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-674b869996-5z2gh" podUID="833c9c7e-23d5-495b-bc31-3bfc82fc6450" Jul 7 00:00:35.723780 containerd[2083]: time="2025-07-07T00:00:35.722477646Z" level=error msg="StopPodSandbox for \"8858dd71f9256dbbe403acf740a9b2a6172f39804d7817804767375fb5ed5559\" failed" error="failed to destroy network for sandbox \"8858dd71f9256dbbe403acf740a9b2a6172f39804d7817804767375fb5ed5559\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:35.724279 kubelet[3316]: E0707 00:00:35.722869 3316 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8858dd71f9256dbbe403acf740a9b2a6172f39804d7817804767375fb5ed5559\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8858dd71f9256dbbe403acf740a9b2a6172f39804d7817804767375fb5ed5559" Jul 7 00:00:35.724279 kubelet[3316]: E0707 00:00:35.722946 3316 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8858dd71f9256dbbe403acf740a9b2a6172f39804d7817804767375fb5ed5559"} Jul 7 00:00:35.724279 kubelet[3316]: E0707 00:00:35.723004 3316 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3bf78fb3-72f6-471c-b914-66a504f5315e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8858dd71f9256dbbe403acf740a9b2a6172f39804d7817804767375fb5ed5559\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:00:35.724279 kubelet[3316]: E0707 00:00:35.723037 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3bf78fb3-72f6-471c-b914-66a504f5315e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8858dd71f9256dbbe403acf740a9b2a6172f39804d7817804767375fb5ed5559\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-p6qwp" podUID="3bf78fb3-72f6-471c-b914-66a504f5315e" Jul 7 00:00:35.728930 containerd[2083]: time="2025-07-07T00:00:35.728876128Z" level=error msg="StopPodSandbox for \"1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92\" failed" error="failed to destroy network for sandbox \"1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 
00:00:35.729470 kubelet[3316]: E0707 00:00:35.729418 3316 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92" Jul 7 00:00:35.729580 kubelet[3316]: E0707 00:00:35.729492 3316 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92"} Jul 7 00:00:35.729580 kubelet[3316]: E0707 00:00:35.729544 3316 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"561fc67c-cd50-4c5b-b964-b8cb6f5c6bbe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:00:35.729770 kubelet[3316]: E0707 00:00:35.729581 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"561fc67c-cd50-4c5b-b964-b8cb6f5c6bbe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-xq9q9" podUID="561fc67c-cd50-4c5b-b964-b8cb6f5c6bbe" Jul 7 00:00:35.756716 containerd[2083]: time="2025-07-07T00:00:35.756636815Z" level=error msg="StopPodSandbox for \"80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56\" failed" error="failed to destroy network for sandbox \"80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:35.757314 kubelet[3316]: E0707 00:00:35.757260 3316 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56" Jul 7 00:00:35.757718 kubelet[3316]: E0707 00:00:35.757586 3316 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56"} Jul 7 00:00:35.757718 kubelet[3316]: E0707 00:00:35.757644 3316 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fd3bd012-86e5-4807-95d5-ad6901284597\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:00:35.758358 kubelet[3316]: E0707 00:00:35.758305 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fd3bd012-86e5-4807-95d5-ad6901284597\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vmlwg" podUID="fd3bd012-86e5-4807-95d5-ad6901284597" Jul 7 00:00:35.762718 containerd[2083]: time="2025-07-07T00:00:35.762638901Z" level=error msg="StopPodSandbox for \"7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729\" failed" error="failed to destroy network for sandbox \"7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:35.763359 kubelet[3316]: E0707 00:00:35.763152 3316 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729" Jul 7 00:00:35.763359 kubelet[3316]: E0707 00:00:35.763225 3316 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729"} Jul 7 00:00:35.763359 kubelet[3316]: E0707 00:00:35.763272 3316 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4fa34b58-0d0d-481f-8d32-2a1b40537372\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:00:35.763359 kubelet[3316]: E0707 00:00:35.763307 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4fa34b58-0d0d-481f-8d32-2a1b40537372\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7446b747d4-twlf4" podUID="4fa34b58-0d0d-481f-8d32-2a1b40537372" Jul 7 00:00:35.767420 containerd[2083]: time="2025-07-07T00:00:35.767365805Z" level=error msg="StopPodSandbox for \"2946e45277cf90d220237b5f463466853bc1a4c61bb560843d98b55977134283\" failed" error="failed to 
destroy network for sandbox \"2946e45277cf90d220237b5f463466853bc1a4c61bb560843d98b55977134283\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:35.768175 containerd[2083]: time="2025-07-07T00:00:35.767583294Z" level=error msg="StopPodSandbox for \"f27fa6a581cdff01c995afe28c668d659743aa45f05ef4ec1c2f95e97812ec6b\" failed" error="failed to destroy network for sandbox \"f27fa6a581cdff01c995afe28c668d659743aa45f05ef4ec1c2f95e97812ec6b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:35.768236 kubelet[3316]: E0707 00:00:35.767702 3316 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2946e45277cf90d220237b5f463466853bc1a4c61bb560843d98b55977134283\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2946e45277cf90d220237b5f463466853bc1a4c61bb560843d98b55977134283" Jul 7 00:00:35.768236 kubelet[3316]: E0707 00:00:35.767772 3316 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2946e45277cf90d220237b5f463466853bc1a4c61bb560843d98b55977134283"} Jul 7 00:00:35.768236 kubelet[3316]: E0707 00:00:35.767820 3316 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3256e36e-3dfd-4340-92ad-002ae5ad9541\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2946e45277cf90d220237b5f463466853bc1a4c61bb560843d98b55977134283\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:00:35.768236 kubelet[3316]: E0707 00:00:35.767861 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3256e36e-3dfd-4340-92ad-002ae5ad9541\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2946e45277cf90d220237b5f463466853bc1a4c61bb560843d98b55977134283\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d699df5cb-rvx8c" podUID="3256e36e-3dfd-4340-92ad-002ae5ad9541" Jul 7 00:00:35.768538 kubelet[3316]: E0707 00:00:35.767954 3316 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f27fa6a581cdff01c995afe28c668d659743aa45f05ef4ec1c2f95e97812ec6b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f27fa6a581cdff01c995afe28c668d659743aa45f05ef4ec1c2f95e97812ec6b" Jul 7 00:00:35.768538 kubelet[3316]: E0707 00:00:35.767987 3316 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f27fa6a581cdff01c995afe28c668d659743aa45f05ef4ec1c2f95e97812ec6b"} Jul 7 00:00:35.768538 kubelet[3316]: E0707 00:00:35.768020 3316 
kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c34bf4f5-bb6b-420d-9d8c-1e1dc634bceb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f27fa6a581cdff01c995afe28c668d659743aa45f05ef4ec1c2f95e97812ec6b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:00:35.768538 kubelet[3316]: E0707 00:00:35.768049 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c34bf4f5-bb6b-420d-9d8c-1e1dc634bceb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f27fa6a581cdff01c995afe28c668d659743aa45f05ef4ec1c2f95e97812ec6b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-674b869996-75pq4" podUID="c34bf4f5-bb6b-420d-9d8c-1e1dc634bceb" Jul 7 00:00:35.769997 containerd[2083]: time="2025-07-07T00:00:35.769952055Z" level=error msg="StopPodSandbox for \"8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b\" failed" error="failed to destroy network for sandbox \"8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:35.770212 kubelet[3316]: E0707 00:00:35.770172 3316 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b" Jul 7 00:00:35.770300 kubelet[3316]: E0707 00:00:35.770225 3316 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b"} Jul 7 00:00:35.770300 kubelet[3316]: E0707 00:00:35.770274 3316 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2e6c58c9-1e4d-4fb6-9bf1-ad7b4521fb7e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:00:35.770428 kubelet[3316]: E0707 00:00:35.770306 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2e6c58c9-1e4d-4fb6-9bf1-ad7b4521fb7e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-xlnl6" 
podUID="2e6c58c9-1e4d-4fb6-9bf1-ad7b4521fb7e" Jul 7 00:00:37.524024 systemd[1]: Started sshd@7-172.31.19.107:22-147.75.109.163:42600.service - OpenSSH per-connection server daemon (147.75.109.163:42600). Jul 7 00:00:37.803866 sshd[4663]: Accepted publickey for core from 147.75.109.163 port 42600 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:00:37.812607 sshd[4663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:00:37.837205 systemd-logind[2061]: New session 8 of user core. Jul 7 00:00:37.845436 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 7 00:00:38.168383 sshd[4663]: pam_unix(sshd:session): session closed for user core Jul 7 00:00:38.176228 systemd[1]: sshd@7-172.31.19.107:22-147.75.109.163:42600.service: Deactivated successfully. Jul 7 00:00:38.184060 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 00:00:38.185508 systemd-logind[2061]: Session 8 logged out. Waiting for processes to exit. Jul 7 00:00:38.189338 systemd-logind[2061]: Removed session 8. Jul 7 00:00:39.995921 systemd-resolved[1982]: Under memory pressure, flushing caches. Jul 7 00:00:39.999296 systemd-journald[1569]: Under memory pressure, flushing caches. Jul 7 00:00:39.995973 systemd-resolved[1982]: Flushed all caches. Jul 7 00:00:40.793686 kubelet[3316]: I0707 00:00:40.793536 3316 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:00:42.846331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3268290172.mount: Deactivated successfully. Jul 7 00:00:42.950173 containerd[2083]: time="2025-07-07T00:00:42.937916241Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 7 00:00:42.976344 containerd[2083]: time="2025-07-07T00:00:42.976260898Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 8.476630932s" Jul 7 00:00:42.976701 containerd[2083]: time="2025-07-07T00:00:42.976672389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 7 00:00:42.990935 containerd[2083]: time="2025-07-07T00:00:42.990850001Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:43.054483 containerd[2083]: time="2025-07-07T00:00:43.054434261Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:43.055523 containerd[2083]: time="2025-07-07T00:00:43.055483715Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:43.086419 containerd[2083]: time="2025-07-07T00:00:43.086356775Z" level=info msg="CreateContainer within sandbox \"67a583cf8a3946c6bd5a85b98d3df2c6b071ee28d57f0b9033669acf9cf9ac1c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 00:00:43.179104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3630494364.mount: 
Deactivated successfully. Jul 7 00:00:43.191250 containerd[2083]: time="2025-07-07T00:00:43.191197527Z" level=info msg="CreateContainer within sandbox \"67a583cf8a3946c6bd5a85b98d3df2c6b071ee28d57f0b9033669acf9cf9ac1c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9c27421f980fe44badff996becf84f24ac2f603a8554225c00c3fbcd748fd532\"" Jul 7 00:00:43.194363 containerd[2083]: time="2025-07-07T00:00:43.193888636Z" level=info msg="StartContainer for \"9c27421f980fe44badff996becf84f24ac2f603a8554225c00c3fbcd748fd532\"" Jul 7 00:00:43.196288 systemd[1]: Started sshd@8-172.31.19.107:22-147.75.109.163:42614.service - OpenSSH per-connection server daemon (147.75.109.163:42614). Jul 7 00:00:43.486627 containerd[2083]: time="2025-07-07T00:00:43.485022341Z" level=info msg="StartContainer for \"9c27421f980fe44badff996becf84f24ac2f603a8554225c00c3fbcd748fd532\" returns successfully" Jul 7 00:00:43.509347 sshd[4688]: Accepted publickey for core from 147.75.109.163 port 42614 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:00:43.514368 sshd[4688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:00:43.523736 systemd-logind[2061]: New session 9 of user core. Jul 7 00:00:43.528876 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 00:00:43.728389 kubelet[3316]: I0707 00:00:43.683953 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xlct9" podStartSLOduration=2.086006545 podStartE2EDuration="31.652323845s" podCreationTimestamp="2025-07-07 00:00:12 +0000 UTC" firstStartedPulling="2025-07-07 00:00:13.411455803 +0000 UTC m=+26.560881638" lastFinishedPulling="2025-07-07 00:00:42.977773107 +0000 UTC m=+56.127198938" observedRunningTime="2025-07-07 00:00:43.65096129 +0000 UTC m=+56.800387141" watchObservedRunningTime="2025-07-07 00:00:43.652323845 +0000 UTC m=+56.801749696" Jul 7 00:00:44.029915 systemd-journald[1569]: Under memory pressure, flushing caches. Jul 7 00:00:44.027777 systemd-resolved[1982]: Under memory pressure, flushing caches. Jul 7 00:00:44.027786 systemd-resolved[1982]: Flushed all caches. Jul 7 00:00:44.182691 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 7 00:00:44.184349 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jul 7 00:00:44.501689 sshd[4688]: pam_unix(sshd:session): session closed for user core Jul 7 00:00:44.514089 systemd[1]: sshd@8-172.31.19.107:22-147.75.109.163:42614.service: Deactivated successfully. Jul 7 00:00:44.526366 containerd[2083]: time="2025-07-07T00:00:44.525975647Z" level=info msg="StopPodSandbox for \"7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729\"" Jul 7 00:00:44.530193 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 00:00:44.538511 systemd-logind[2061]: Session 9 logged out. Waiting for processes to exit. Jul 7 00:00:44.548737 systemd-logind[2061]: Removed session 9. Jul 7 00:00:45.368085 containerd[2083]: 2025-07-07 00:00:44.849 [INFO][4785] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729" Jul 7 00:00:45.368085 containerd[2083]: 2025-07-07 00:00:44.851 [INFO][4785] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns.
ContainerID="7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729" iface="eth0" netns="/var/run/netns/cni-ec412730-b955-94b2-4a36-04f09433a8a5" Jul 7 00:00:45.368085 containerd[2083]: 2025-07-07 00:00:44.853 [INFO][4785] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729" iface="eth0" netns="/var/run/netns/cni-ec412730-b955-94b2-4a36-04f09433a8a5" Jul 7 00:00:45.368085 containerd[2083]: 2025-07-07 00:00:44.854 [INFO][4785] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729" iface="eth0" netns="/var/run/netns/cni-ec412730-b955-94b2-4a36-04f09433a8a5" Jul 7 00:00:45.368085 containerd[2083]: 2025-07-07 00:00:44.855 [INFO][4785] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729" Jul 7 00:00:45.368085 containerd[2083]: 2025-07-07 00:00:44.855 [INFO][4785] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729" Jul 7 00:00:45.368085 containerd[2083]: 2025-07-07 00:00:45.327 [INFO][4815] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729" HandleID="k8s-pod-network.7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729" Workload="ip--172--31--19--107-k8s-whisker--7446b747d4--twlf4-eth0" Jul 7 00:00:45.368085 containerd[2083]: 2025-07-07 00:00:45.333 [INFO][4815] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:45.368085 containerd[2083]: 2025-07-07 00:00:45.335 [INFO][4815] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:00:45.368085 containerd[2083]: 2025-07-07 00:00:45.354 [WARNING][4815] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729" HandleID="k8s-pod-network.7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729" Workload="ip--172--31--19--107-k8s-whisker--7446b747d4--twlf4-eth0" Jul 7 00:00:45.368085 containerd[2083]: 2025-07-07 00:00:45.354 [INFO][4815] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729" HandleID="k8s-pod-network.7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729" Workload="ip--172--31--19--107-k8s-whisker--7446b747d4--twlf4-eth0" Jul 7 00:00:45.368085 containerd[2083]: 2025-07-07 00:00:45.358 [INFO][4815] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:45.368085 containerd[2083]: 2025-07-07 00:00:45.364 [INFO][4785] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729" Jul 7 00:00:45.375491 systemd[1]: run-netns-cni\x2dec412730\x2db955\x2d94b2\x2d4a36\x2d04f09433a8a5.mount: Deactivated successfully. 
Jul 7 00:00:45.384463 containerd[2083]: time="2025-07-07T00:00:45.384395852Z" level=info msg="TearDown network for sandbox \"7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729\" successfully" Jul 7 00:00:45.384463 containerd[2083]: time="2025-07-07T00:00:45.384453057Z" level=info msg="StopPodSandbox for \"7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729\" returns successfully" Jul 7 00:00:45.529248 kubelet[3316]: I0707 00:00:45.529094 3316 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4fa34b58-0d0d-481f-8d32-2a1b40537372-whisker-backend-key-pair\") pod \"4fa34b58-0d0d-481f-8d32-2a1b40537372\" (UID: \"4fa34b58-0d0d-481f-8d32-2a1b40537372\") " Jul 7 00:00:45.536089 kubelet[3316]: I0707 00:00:45.535757 3316 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4l2t\" (UniqueName: \"kubernetes.io/projected/4fa34b58-0d0d-481f-8d32-2a1b40537372-kube-api-access-x4l2t\") pod \"4fa34b58-0d0d-481f-8d32-2a1b40537372\" (UID: \"4fa34b58-0d0d-481f-8d32-2a1b40537372\") " Jul 7 00:00:45.543483 kubelet[3316]: I0707 00:00:45.542701 3316 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fa34b58-0d0d-481f-8d32-2a1b40537372-whisker-ca-bundle\") pod \"4fa34b58-0d0d-481f-8d32-2a1b40537372\" (UID: \"4fa34b58-0d0d-481f-8d32-2a1b40537372\") " Jul 7 00:00:45.569691 systemd[1]: var-lib-kubelet-pods-4fa34b58\x2d0d0d\x2d481f\x2d8d32\x2d2a1b40537372-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 7 00:00:45.572812 kubelet[3316]: I0707 00:00:45.571348 3316 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fa34b58-0d0d-481f-8d32-2a1b40537372-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "4fa34b58-0d0d-481f-8d32-2a1b40537372" (UID: "4fa34b58-0d0d-481f-8d32-2a1b40537372"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 7 00:00:45.575000 kubelet[3316]: I0707 00:00:45.574947 3316 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fa34b58-0d0d-481f-8d32-2a1b40537372-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "4fa34b58-0d0d-481f-8d32-2a1b40537372" (UID: "4fa34b58-0d0d-481f-8d32-2a1b40537372"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 7 00:00:45.576346 kubelet[3316]: I0707 00:00:45.575837 3316 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fa34b58-0d0d-481f-8d32-2a1b40537372-kube-api-access-x4l2t" (OuterVolumeSpecName: "kube-api-access-x4l2t") pod "4fa34b58-0d0d-481f-8d32-2a1b40537372" (UID: "4fa34b58-0d0d-481f-8d32-2a1b40537372"). InnerVolumeSpecName "kube-api-access-x4l2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 7 00:00:45.580026 systemd[1]: var-lib-kubelet-pods-4fa34b58\x2d0d0d\x2d481f\x2d8d32\x2d2a1b40537372-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx4l2t.mount: Deactivated successfully. 
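
The mount-unit names in these unmount messages look mangled but are a deterministic systemd encoding of the volume path: path components are joined with '-', and bytes outside a small safe set are hex-escaped, so '-' becomes \x2d and the '~' in kubernetes.io~secret becomes \x7e. A rough Go rendering of that escaping — an approximation of what systemd-escape --path --suffix=mount produces, not systemd's actual implementation (which also special-cases leading dots and empty components):

package main

import (
    "fmt"
    "strings"
)

// escapeUnit approximates systemd unit-name escaping for an absolute path.
func escapeUnit(path, suffix string) string {
    var b strings.Builder
    for _, c := range []byte(strings.Trim(path, "/")) {
        switch {
        case c == '/':
            b.WriteByte('-') // component separator
        case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
            c >= '0' && c <= '9', c == ':', c == '_', c == '.':
            b.WriteByte(c) // safe characters pass through unchanged
        default:
            fmt.Fprintf(&b, `\x%02x`, c) // everything else is hex-escaped
        }
    }
    return b.String() + "." + suffix
}

func main() {
    p := "/var/lib/kubelet/pods/4fa34b58-0d0d-481f-8d32-2a1b40537372" +
        "/volumes/kubernetes.io~secret/whisker-backend-key-pair"
    fmt.Println(escapeUnit(p, "mount"))
    // Reproduces the var-lib-kubelet-pods-...whisker\x2dbackend\x2dkey\x2dpair.mount
    // unit name systemd deactivated above.
}
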
Jul 7 00:00:45.644929 kubelet[3316]: I0707 00:00:45.643921 3316 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4fa34b58-0d0d-481f-8d32-2a1b40537372-whisker-backend-key-pair\") on node \"ip-172-31-19-107\" DevicePath \"\"" Jul 7 00:00:45.644929 kubelet[3316]: I0707 00:00:45.643959 3316 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4l2t\" (UniqueName: \"kubernetes.io/projected/4fa34b58-0d0d-481f-8d32-2a1b40537372-kube-api-access-x4l2t\") on node \"ip-172-31-19-107\" DevicePath \"\"" Jul 7 00:00:45.644929 kubelet[3316]: I0707 00:00:45.643974 3316 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fa34b58-0d0d-481f-8d32-2a1b40537372-whisker-ca-bundle\") on node \"ip-172-31-19-107\" DevicePath \"\"" Jul 7 00:00:45.669981 systemd[1]: run-containerd-runc-k8s.io-9c27421f980fe44badff996becf84f24ac2f603a8554225c00c3fbcd748fd532-runc.R1J3Kx.mount: Deactivated successfully. Jul 7 00:00:45.845302 kubelet[3316]: I0707 00:00:45.845041 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9724a68b-d002-47c2-a8bc-c4013d4ccfdd-whisker-backend-key-pair\") pod \"whisker-6c7fdbc78d-zzfkq\" (UID: \"9724a68b-d002-47c2-a8bc-c4013d4ccfdd\") " pod="calico-system/whisker-6c7fdbc78d-zzfkq" Jul 7 00:00:45.845302 kubelet[3316]: I0707 00:00:45.845212 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9724a68b-d002-47c2-a8bc-c4013d4ccfdd-whisker-ca-bundle\") pod \"whisker-6c7fdbc78d-zzfkq\" (UID: \"9724a68b-d002-47c2-a8bc-c4013d4ccfdd\") " pod="calico-system/whisker-6c7fdbc78d-zzfkq" Jul 7 00:00:45.845302 kubelet[3316]: I0707 00:00:45.845248 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zqvw\" (UniqueName: \"kubernetes.io/projected/9724a68b-d002-47c2-a8bc-c4013d4ccfdd-kube-api-access-9zqvw\") pod \"whisker-6c7fdbc78d-zzfkq\" (UID: \"9724a68b-d002-47c2-a8bc-c4013d4ccfdd\") " pod="calico-system/whisker-6c7fdbc78d-zzfkq" Jul 7 00:00:46.054809 containerd[2083]: time="2025-07-07T00:00:46.053322285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c7fdbc78d-zzfkq,Uid:9724a68b-d002-47c2-a8bc-c4013d4ccfdd,Namespace:calico-system,Attempt:0,}" Jul 7 00:00:46.086464 systemd-journald[1569]: Under memory pressure, flushing caches. Jul 7 00:00:46.075866 systemd-resolved[1982]: Under memory pressure, flushing caches. Jul 7 00:00:46.075876 systemd-resolved[1982]: Flushed all caches. Jul 7 00:00:46.512316 (udev-worker)[4762]: Network interface NamePolicy= disabled on kernel command line. 
Jul 7 00:00:46.524040 systemd-networkd[1648]: cali90ed208f272: Link UP Jul 7 00:00:46.524387 systemd-networkd[1648]: cali90ed208f272: Gained carrier Jul 7 00:00:46.582822 containerd[2083]: 2025-07-07 00:00:46.216 [INFO][4944] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:00:46.582822 containerd[2083]: 2025-07-07 00:00:46.254 [INFO][4944] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--107-k8s-whisker--6c7fdbc78d--zzfkq-eth0 whisker-6c7fdbc78d- calico-system 9724a68b-d002-47c2-a8bc-c4013d4ccfdd 974 0 2025-07-07 00:00:45 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6c7fdbc78d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-19-107 whisker-6c7fdbc78d-zzfkq eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali90ed208f272 [] [] }} ContainerID="9f1f44501d6c270c9fbf8af91d773a3b6a4e364f6c70370e7b32278b134a2f84" Namespace="calico-system" Pod="whisker-6c7fdbc78d-zzfkq" WorkloadEndpoint="ip--172--31--19--107-k8s-whisker--6c7fdbc78d--zzfkq-" Jul 7 00:00:46.582822 containerd[2083]: 2025-07-07 00:00:46.254 [INFO][4944] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9f1f44501d6c270c9fbf8af91d773a3b6a4e364f6c70370e7b32278b134a2f84" Namespace="calico-system" Pod="whisker-6c7fdbc78d-zzfkq" WorkloadEndpoint="ip--172--31--19--107-k8s-whisker--6c7fdbc78d--zzfkq-eth0" Jul 7 00:00:46.582822 containerd[2083]: 2025-07-07 00:00:46.375 [INFO][4958] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9f1f44501d6c270c9fbf8af91d773a3b6a4e364f6c70370e7b32278b134a2f84" HandleID="k8s-pod-network.9f1f44501d6c270c9fbf8af91d773a3b6a4e364f6c70370e7b32278b134a2f84" Workload="ip--172--31--19--107-k8s-whisker--6c7fdbc78d--zzfkq-eth0" Jul 7 00:00:46.582822 containerd[2083]: 2025-07-07 00:00:46.376 [INFO][4958] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9f1f44501d6c270c9fbf8af91d773a3b6a4e364f6c70370e7b32278b134a2f84" HandleID="k8s-pod-network.9f1f44501d6c270c9fbf8af91d773a3b6a4e364f6c70370e7b32278b134a2f84" Workload="ip--172--31--19--107-k8s-whisker--6c7fdbc78d--zzfkq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e1e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-107", "pod":"whisker-6c7fdbc78d-zzfkq", "timestamp":"2025-07-07 00:00:46.373875471 +0000 UTC"}, Hostname:"ip-172-31-19-107", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:00:46.582822 containerd[2083]: 2025-07-07 00:00:46.376 [INFO][4958] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:46.582822 containerd[2083]: 2025-07-07 00:00:46.376 [INFO][4958] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:00:46.582822 containerd[2083]: 2025-07-07 00:00:46.377 [INFO][4958] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-107' Jul 7 00:00:46.582822 containerd[2083]: 2025-07-07 00:00:46.392 [INFO][4958] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9f1f44501d6c270c9fbf8af91d773a3b6a4e364f6c70370e7b32278b134a2f84" host="ip-172-31-19-107" Jul 7 00:00:46.582822 containerd[2083]: 2025-07-07 00:00:46.407 [INFO][4958] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-107" Jul 7 00:00:46.582822 containerd[2083]: 2025-07-07 00:00:46.417 [INFO][4958] ipam/ipam.go 511: Trying affinity for 192.168.66.64/26 host="ip-172-31-19-107" Jul 7 00:00:46.582822 containerd[2083]: 2025-07-07 00:00:46.422 [INFO][4958] ipam/ipam.go 158: Attempting to load block cidr=192.168.66.64/26 host="ip-172-31-19-107" Jul 7 00:00:46.582822 containerd[2083]: 2025-07-07 00:00:46.427 [INFO][4958] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.66.64/26 host="ip-172-31-19-107" Jul 7 00:00:46.582822 containerd[2083]: 2025-07-07 00:00:46.428 [INFO][4958] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.66.64/26 handle="k8s-pod-network.9f1f44501d6c270c9fbf8af91d773a3b6a4e364f6c70370e7b32278b134a2f84" host="ip-172-31-19-107" Jul 7 00:00:46.582822 containerd[2083]: 2025-07-07 00:00:46.433 [INFO][4958] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9f1f44501d6c270c9fbf8af91d773a3b6a4e364f6c70370e7b32278b134a2f84 Jul 7 00:00:46.582822 containerd[2083]: 2025-07-07 00:00:46.450 [INFO][4958] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.66.64/26 handle="k8s-pod-network.9f1f44501d6c270c9fbf8af91d773a3b6a4e364f6c70370e7b32278b134a2f84" host="ip-172-31-19-107" Jul 7 00:00:46.582822 containerd[2083]: 2025-07-07 00:00:46.466 [INFO][4958] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.66.65/26] block=192.168.66.64/26 handle="k8s-pod-network.9f1f44501d6c270c9fbf8af91d773a3b6a4e364f6c70370e7b32278b134a2f84" host="ip-172-31-19-107" Jul 7 00:00:46.582822 containerd[2083]: 2025-07-07 00:00:46.467 [INFO][4958] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.66.65/26] handle="k8s-pod-network.9f1f44501d6c270c9fbf8af91d773a3b6a4e364f6c70370e7b32278b134a2f84" host="ip-172-31-19-107" Jul 7 00:00:46.582822 containerd[2083]: 2025-07-07 00:00:46.469 [INFO][4958] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
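
The ipam.go lines in this ADD trace walk Calico's block-affinity allocation end to end: under the host-wide lock, look up the blocks affine to ip-172-31-19-107, try the existing affinity 192.168.66.64/26, load that block, take the next free address from it, create a handle, and write the block back to claim 192.168.66.65. A condensed sketch of the assignment step, with block and its fields as illustrative stand-ins rather than Calico's real types:

package main

import (
    "fmt"
    "net/netip"
)

// block is a toy allocation block: a /26 CIDR plus a used-address map that
// is written back to claim assignments ("Writing block in order to claim IPs").
type block struct {
    cidr netip.Prefix
    used map[netip.Addr]string // address -> handle ID
}

// assignFromBlock hands out the first free address and records the handle,
// mirroring "Attempting to assign 1 addresses from block".
func assignFromBlock(b *block, handle string) (netip.Addr, bool) {
    for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
        if _, taken := b.used[a]; !taken {
            b.used[a] = handle
            return a, true
        }
    }
    return netip.Addr{}, false // block exhausted: claim another affine block
}

func main() {
    b := &block{
        cidr: netip.MustParsePrefix("192.168.66.64/26"),
        used: map[netip.Addr]string{
            // The network address itself is never handed out; real Calico
            // also applies per-pool reservations and a CAS write-back.
            netip.MustParseAddr("192.168.66.64"): "reserved",
        },
    }
    ip, _ := assignFromBlock(b, "k8s-pod-network.9f1f44501d6c270c9fbf8af91d773a3b6a4e364f6c70370e7b32278b134a2f84")
    fmt.Println("Successfully claimed IPs:", ip) // 192.168.66.65
}
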
Jul 7 00:00:46.582822 containerd[2083]: 2025-07-07 00:00:46.469 [INFO][4958] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.65/26] IPv6=[] ContainerID="9f1f44501d6c270c9fbf8af91d773a3b6a4e364f6c70370e7b32278b134a2f84" HandleID="k8s-pod-network.9f1f44501d6c270c9fbf8af91d773a3b6a4e364f6c70370e7b32278b134a2f84" Workload="ip--172--31--19--107-k8s-whisker--6c7fdbc78d--zzfkq-eth0" Jul 7 00:00:46.586056 containerd[2083]: 2025-07-07 00:00:46.478 [INFO][4944] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9f1f44501d6c270c9fbf8af91d773a3b6a4e364f6c70370e7b32278b134a2f84" Namespace="calico-system" Pod="whisker-6c7fdbc78d-zzfkq" WorkloadEndpoint="ip--172--31--19--107-k8s-whisker--6c7fdbc78d--zzfkq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--107-k8s-whisker--6c7fdbc78d--zzfkq-eth0", GenerateName:"whisker-6c7fdbc78d-", Namespace:"calico-system", SelfLink:"", UID:"9724a68b-d002-47c2-a8bc-c4013d4ccfdd", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 45, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c7fdbc78d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-107", ContainerID:"", Pod:"whisker-6c7fdbc78d-zzfkq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.66.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali90ed208f272", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:46.586056 containerd[2083]: 2025-07-07 00:00:46.478 [INFO][4944] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.65/32] ContainerID="9f1f44501d6c270c9fbf8af91d773a3b6a4e364f6c70370e7b32278b134a2f84" Namespace="calico-system" Pod="whisker-6c7fdbc78d-zzfkq" WorkloadEndpoint="ip--172--31--19--107-k8s-whisker--6c7fdbc78d--zzfkq-eth0" Jul 7 00:00:46.586056 containerd[2083]: 2025-07-07 00:00:46.478 [INFO][4944] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali90ed208f272 ContainerID="9f1f44501d6c270c9fbf8af91d773a3b6a4e364f6c70370e7b32278b134a2f84" Namespace="calico-system" Pod="whisker-6c7fdbc78d-zzfkq" WorkloadEndpoint="ip--172--31--19--107-k8s-whisker--6c7fdbc78d--zzfkq-eth0" Jul 7 00:00:46.586056 containerd[2083]: 2025-07-07 00:00:46.530 [INFO][4944] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9f1f44501d6c270c9fbf8af91d773a3b6a4e364f6c70370e7b32278b134a2f84" Namespace="calico-system" Pod="whisker-6c7fdbc78d-zzfkq" WorkloadEndpoint="ip--172--31--19--107-k8s-whisker--6c7fdbc78d--zzfkq-eth0" Jul 7 00:00:46.586056 containerd[2083]: 2025-07-07 00:00:46.535 [INFO][4944] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9f1f44501d6c270c9fbf8af91d773a3b6a4e364f6c70370e7b32278b134a2f84" Namespace="calico-system" Pod="whisker-6c7fdbc78d-zzfkq"
WorkloadEndpoint="ip--172--31--19--107-k8s-whisker--6c7fdbc78d--zzfkq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--107-k8s-whisker--6c7fdbc78d--zzfkq-eth0", GenerateName:"whisker-6c7fdbc78d-", Namespace:"calico-system", SelfLink:"", UID:"9724a68b-d002-47c2-a8bc-c4013d4ccfdd", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c7fdbc78d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-107", ContainerID:"9f1f44501d6c270c9fbf8af91d773a3b6a4e364f6c70370e7b32278b134a2f84", Pod:"whisker-6c7fdbc78d-zzfkq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.66.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali90ed208f272", MAC:"ca:1c:a1:46:77:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:46.586056 containerd[2083]: 2025-07-07 00:00:46.560 [INFO][4944] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9f1f44501d6c270c9fbf8af91d773a3b6a4e364f6c70370e7b32278b134a2f84" Namespace="calico-system" Pod="whisker-6c7fdbc78d-zzfkq" WorkloadEndpoint="ip--172--31--19--107-k8s-whisker--6c7fdbc78d--zzfkq-eth0" Jul 7 00:00:46.665800 containerd[2083]: time="2025-07-07T00:00:46.665549095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:00:46.666350 containerd[2083]: time="2025-07-07T00:00:46.665798528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:00:46.666499 containerd[2083]: time="2025-07-07T00:00:46.666426737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:46.668269 containerd[2083]: time="2025-07-07T00:00:46.668135552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:46.759363 systemd[1]: run-containerd-runc-k8s.io-9f1f44501d6c270c9fbf8af91d773a3b6a4e364f6c70370e7b32278b134a2f84-runc.fQ7SGA.mount: Deactivated successfully. 
Jul 7 00:00:46.860853 containerd[2083]: time="2025-07-07T00:00:46.860750030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c7fdbc78d-zzfkq,Uid:9724a68b-d002-47c2-a8bc-c4013d4ccfdd,Namespace:calico-system,Attempt:0,} returns sandbox id \"9f1f44501d6c270c9fbf8af91d773a3b6a4e364f6c70370e7b32278b134a2f84\"" Jul 7 00:00:46.872743 kernel: bpftool[5044]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 7 00:00:46.874206 containerd[2083]: time="2025-07-07T00:00:46.874161704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 7 00:00:47.104038 containerd[2083]: time="2025-07-07T00:00:47.103977107Z" level=info msg="StopPodSandbox for \"f27fa6a581cdff01c995afe28c668d659743aa45f05ef4ec1c2f95e97812ec6b\"" Jul 7 00:00:47.146800 kubelet[3316]: I0707 00:00:47.146255 3316 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fa34b58-0d0d-481f-8d32-2a1b40537372" path="/var/lib/kubelet/pods/4fa34b58-0d0d-481f-8d32-2a1b40537372/volumes" Jul 7 00:00:47.160979 containerd[2083]: time="2025-07-07T00:00:47.160213435Z" level=info msg="StopPodSandbox for \"7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729\"" Jul 7 00:00:47.326739 (udev-worker)[4972]: Network interface NamePolicy= disabled on kernel command line. Jul 7 00:00:47.336164 systemd-networkd[1648]: vxlan.calico: Link UP Jul 7 00:00:47.336174 systemd-networkd[1648]: vxlan.calico: Gained carrier Jul 7 00:00:47.530974 containerd[2083]: 2025-07-07 00:00:47.290 [INFO][5062] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f27fa6a581cdff01c995afe28c668d659743aa45f05ef4ec1c2f95e97812ec6b" Jul 7 00:00:47.530974 containerd[2083]: 2025-07-07 00:00:47.292 [INFO][5062] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f27fa6a581cdff01c995afe28c668d659743aa45f05ef4ec1c2f95e97812ec6b" iface="eth0" netns="/var/run/netns/cni-9d58eedb-5627-78cf-4afa-a578c1f94139" Jul 7 00:00:47.530974 containerd[2083]: 2025-07-07 00:00:47.295 [INFO][5062] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f27fa6a581cdff01c995afe28c668d659743aa45f05ef4ec1c2f95e97812ec6b" iface="eth0" netns="/var/run/netns/cni-9d58eedb-5627-78cf-4afa-a578c1f94139" Jul 7 00:00:47.530974 containerd[2083]: 2025-07-07 00:00:47.296 [INFO][5062] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f27fa6a581cdff01c995afe28c668d659743aa45f05ef4ec1c2f95e97812ec6b" iface="eth0" netns="/var/run/netns/cni-9d58eedb-5627-78cf-4afa-a578c1f94139" Jul 7 00:00:47.530974 containerd[2083]: 2025-07-07 00:00:47.301 [INFO][5062] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f27fa6a581cdff01c995afe28c668d659743aa45f05ef4ec1c2f95e97812ec6b" Jul 7 00:00:47.530974 containerd[2083]: 2025-07-07 00:00:47.301 [INFO][5062] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f27fa6a581cdff01c995afe28c668d659743aa45f05ef4ec1c2f95e97812ec6b" Jul 7 00:00:47.530974 containerd[2083]: 2025-07-07 00:00:47.489 [INFO][5094] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f27fa6a581cdff01c995afe28c668d659743aa45f05ef4ec1c2f95e97812ec6b" HandleID="k8s-pod-network.f27fa6a581cdff01c995afe28c668d659743aa45f05ef4ec1c2f95e97812ec6b" Workload="ip--172--31--19--107-k8s-calico--apiserver--674b869996--75pq4-eth0" Jul 7 00:00:47.530974 containerd[2083]: 2025-07-07 00:00:47.494 [INFO][5094] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 7 00:00:47.530974 containerd[2083]: 2025-07-07 00:00:47.495 [INFO][5094] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:00:47.530974 containerd[2083]: 2025-07-07 00:00:47.510 [WARNING][5094] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f27fa6a581cdff01c995afe28c668d659743aa45f05ef4ec1c2f95e97812ec6b" HandleID="k8s-pod-network.f27fa6a581cdff01c995afe28c668d659743aa45f05ef4ec1c2f95e97812ec6b" Workload="ip--172--31--19--107-k8s-calico--apiserver--674b869996--75pq4-eth0" Jul 7 00:00:47.530974 containerd[2083]: 2025-07-07 00:00:47.511 [INFO][5094] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f27fa6a581cdff01c995afe28c668d659743aa45f05ef4ec1c2f95e97812ec6b" HandleID="k8s-pod-network.f27fa6a581cdff01c995afe28c668d659743aa45f05ef4ec1c2f95e97812ec6b" Workload="ip--172--31--19--107-k8s-calico--apiserver--674b869996--75pq4-eth0" Jul 7 00:00:47.530974 containerd[2083]: 2025-07-07 00:00:47.516 [INFO][5094] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:47.530974 containerd[2083]: 2025-07-07 00:00:47.525 [INFO][5062] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f27fa6a581cdff01c995afe28c668d659743aa45f05ef4ec1c2f95e97812ec6b" Jul 7 00:00:47.531619 containerd[2083]: time="2025-07-07T00:00:47.531089711Z" level=info msg="TearDown network for sandbox \"f27fa6a581cdff01c995afe28c668d659743aa45f05ef4ec1c2f95e97812ec6b\" successfully" Jul 7 00:00:47.531619 containerd[2083]: time="2025-07-07T00:00:47.531227609Z" level=info msg="StopPodSandbox for \"f27fa6a581cdff01c995afe28c668d659743aa45f05ef4ec1c2f95e97812ec6b\" returns successfully" Jul 7 00:00:47.542241 containerd[2083]: time="2025-07-07T00:00:47.542190336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-674b869996-75pq4,Uid:c34bf4f5-bb6b-420d-9d8c-1e1dc634bceb,Namespace:calico-apiserver,Attempt:1,}" Jul 7 00:00:47.544274 systemd[1]: run-netns-cni\x2d9d58eedb\x2d5627\x2d78cf\x2d4afa\x2da578c1f94139.mount: Deactivated successfully. Jul 7 00:00:47.629711 containerd[2083]: 2025-07-07 00:00:47.340 [WARNING][5083] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729" WorkloadEndpoint="ip--172--31--19--107-k8s-whisker--7446b747d4--twlf4-eth0" Jul 7 00:00:47.629711 containerd[2083]: 2025-07-07 00:00:47.340 [INFO][5083] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729" Jul 7 00:00:47.629711 containerd[2083]: 2025-07-07 00:00:47.340 [INFO][5083] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729" iface="eth0" netns="" Jul 7 00:00:47.629711 containerd[2083]: 2025-07-07 00:00:47.340 [INFO][5083] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729" Jul 7 00:00:47.629711 containerd[2083]: 2025-07-07 00:00:47.340 [INFO][5083] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729" Jul 7 00:00:47.629711 containerd[2083]: 2025-07-07 00:00:47.602 [INFO][5105] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729" HandleID="k8s-pod-network.7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729" Workload="ip--172--31--19--107-k8s-whisker--7446b747d4--twlf4-eth0" Jul 7 00:00:47.629711 containerd[2083]: 2025-07-07 00:00:47.603 [INFO][5105] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:47.629711 containerd[2083]: 2025-07-07 00:00:47.603 [INFO][5105] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:00:47.629711 containerd[2083]: 2025-07-07 00:00:47.617 [WARNING][5105] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729" HandleID="k8s-pod-network.7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729" Workload="ip--172--31--19--107-k8s-whisker--7446b747d4--twlf4-eth0" Jul 7 00:00:47.629711 containerd[2083]: 2025-07-07 00:00:47.617 [INFO][5105] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729" HandleID="k8s-pod-network.7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729" Workload="ip--172--31--19--107-k8s-whisker--7446b747d4--twlf4-eth0" Jul 7 00:00:47.629711 containerd[2083]: 2025-07-07 00:00:47.621 [INFO][5105] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:47.629711 containerd[2083]: 2025-07-07 00:00:47.624 [INFO][5083] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729" Jul 7 00:00:47.629711 containerd[2083]: time="2025-07-07T00:00:47.628017249Z" level=info msg="TearDown network for sandbox \"7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729\" successfully" Jul 7 00:00:47.629711 containerd[2083]: time="2025-07-07T00:00:47.628054078Z" level=info msg="StopPodSandbox for \"7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729\" returns successfully" Jul 7 00:00:47.629711 containerd[2083]: time="2025-07-07T00:00:47.629020737Z" level=info msg="RemovePodSandbox for \"7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729\"" Jul 7 00:00:47.629711 containerd[2083]: time="2025-07-07T00:00:47.629071555Z" level=info msg="Forcibly stopping sandbox \"7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729\"" Jul 7 00:00:47.867813 systemd-networkd[1648]: cali90ed208f272: Gained IPv6LL Jul 7 00:00:47.953632 (udev-worker)[5120]: Network interface NamePolicy= disabled on kernel command line. 
Jul 7 00:00:47.957136 systemd-networkd[1648]: cali1a6bbed993c: Link UP Jul 7 00:00:47.963728 systemd-networkd[1648]: cali1a6bbed993c: Gained carrier Jul 7 00:00:47.993714 containerd[2083]: 2025-07-07 00:00:47.775 [WARNING][5141] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729" WorkloadEndpoint="ip--172--31--19--107-k8s-whisker--7446b747d4--twlf4-eth0" Jul 7 00:00:47.993714 containerd[2083]: 2025-07-07 00:00:47.775 [INFO][5141] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729" Jul 7 00:00:47.993714 containerd[2083]: 2025-07-07 00:00:47.776 [INFO][5141] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729" iface="eth0" netns="" Jul 7 00:00:47.993714 containerd[2083]: 2025-07-07 00:00:47.776 [INFO][5141] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729" Jul 7 00:00:47.993714 containerd[2083]: 2025-07-07 00:00:47.776 [INFO][5141] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729" Jul 7 00:00:47.993714 containerd[2083]: 2025-07-07 00:00:47.849 [INFO][5163] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729" HandleID="k8s-pod-network.7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729" Workload="ip--172--31--19--107-k8s-whisker--7446b747d4--twlf4-eth0" Jul 7 00:00:47.993714 containerd[2083]: 2025-07-07 00:00:47.850 [INFO][5163] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:47.993714 containerd[2083]: 2025-07-07 00:00:47.943 [INFO][5163] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:00:47.993714 containerd[2083]: 2025-07-07 00:00:47.958 [WARNING][5163] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729" HandleID="k8s-pod-network.7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729" Workload="ip--172--31--19--107-k8s-whisker--7446b747d4--twlf4-eth0" Jul 7 00:00:47.993714 containerd[2083]: 2025-07-07 00:00:47.959 [INFO][5163] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729" HandleID="k8s-pod-network.7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729" Workload="ip--172--31--19--107-k8s-whisker--7446b747d4--twlf4-eth0" Jul 7 00:00:47.993714 containerd[2083]: 2025-07-07 00:00:47.968 [INFO][5163] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:47.993714 containerd[2083]: 2025-07-07 00:00:47.981 [INFO][5141] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729" Jul 7 00:00:47.993714 containerd[2083]: time="2025-07-07T00:00:47.992765753Z" level=info msg="TearDown network for sandbox \"7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729\" successfully" Jul 7 00:00:48.004287 containerd[2083]: time="2025-07-07T00:00:48.004230144Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:00:48.004556 containerd[2083]: time="2025-07-07T00:00:48.004327787Z" level=info msg="RemovePodSandbox \"7fe907f083013fcc183933378254305b23554dd8e547a7233c97a5090082d729\" returns successfully" Jul 7 00:00:48.011072 containerd[2083]: 2025-07-07 00:00:47.754 [INFO][5125] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--107-k8s-calico--apiserver--674b869996--75pq4-eth0 calico-apiserver-674b869996- calico-apiserver c34bf4f5-bb6b-420d-9d8c-1e1dc634bceb 986 0 2025-07-07 00:00:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:674b869996 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-107 calico-apiserver-674b869996-75pq4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1a6bbed993c [] [] }} ContainerID="f909e71267557346a04fd3e897bd0edb965a947059e7da2ddad7d37debc45d1f" Namespace="calico-apiserver" Pod="calico-apiserver-674b869996-75pq4" WorkloadEndpoint="ip--172--31--19--107-k8s-calico--apiserver--674b869996--75pq4-" Jul 7 00:00:48.011072 containerd[2083]: 2025-07-07 00:00:47.754 [INFO][5125] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f909e71267557346a04fd3e897bd0edb965a947059e7da2ddad7d37debc45d1f" Namespace="calico-apiserver" Pod="calico-apiserver-674b869996-75pq4" WorkloadEndpoint="ip--172--31--19--107-k8s-calico--apiserver--674b869996--75pq4-eth0" Jul 7 00:00:48.011072 containerd[2083]: 2025-07-07 00:00:47.826 [INFO][5157] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f909e71267557346a04fd3e897bd0edb965a947059e7da2ddad7d37debc45d1f" HandleID="k8s-pod-network.f909e71267557346a04fd3e897bd0edb965a947059e7da2ddad7d37debc45d1f" Workload="ip--172--31--19--107-k8s-calico--apiserver--674b869996--75pq4-eth0" Jul 7 00:00:48.011072 containerd[2083]: 2025-07-07 00:00:47.828 [INFO][5157] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f909e71267557346a04fd3e897bd0edb965a947059e7da2ddad7d37debc45d1f" HandleID="k8s-pod-network.f909e71267557346a04fd3e897bd0edb965a947059e7da2ddad7d37debc45d1f" Workload="ip--172--31--19--107-k8s-calico--apiserver--674b869996--75pq4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5640), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-19-107", "pod":"calico-apiserver-674b869996-75pq4", "timestamp":"2025-07-07 00:00:47.82462182 +0000 UTC"}, Hostname:"ip-172-31-19-107", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:00:48.011072 containerd[2083]: 2025-07-07 00:00:47.828 [INFO][5157] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:48.011072 containerd[2083]: 2025-07-07 00:00:47.829 [INFO][5157] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:00:48.011072 containerd[2083]: 2025-07-07 00:00:47.829 [INFO][5157] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-107' Jul 7 00:00:48.011072 containerd[2083]: 2025-07-07 00:00:47.849 [INFO][5157] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f909e71267557346a04fd3e897bd0edb965a947059e7da2ddad7d37debc45d1f" host="ip-172-31-19-107" Jul 7 00:00:48.011072 containerd[2083]: 2025-07-07 00:00:47.861 [INFO][5157] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-107" Jul 7 00:00:48.011072 containerd[2083]: 2025-07-07 00:00:47.888 [INFO][5157] ipam/ipam.go 511: Trying affinity for 192.168.66.64/26 host="ip-172-31-19-107" Jul 7 00:00:48.011072 containerd[2083]: 2025-07-07 00:00:47.898 [INFO][5157] ipam/ipam.go 158: Attempting to load block cidr=192.168.66.64/26 host="ip-172-31-19-107" Jul 7 00:00:48.011072 containerd[2083]: 2025-07-07 00:00:47.911 [INFO][5157] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.66.64/26 host="ip-172-31-19-107" Jul 7 00:00:48.011072 containerd[2083]: 2025-07-07 00:00:47.911 [INFO][5157] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.66.64/26 handle="k8s-pod-network.f909e71267557346a04fd3e897bd0edb965a947059e7da2ddad7d37debc45d1f" host="ip-172-31-19-107" Jul 7 00:00:48.011072 containerd[2083]: 2025-07-07 00:00:47.922 [INFO][5157] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f909e71267557346a04fd3e897bd0edb965a947059e7da2ddad7d37debc45d1f Jul 7 00:00:48.011072 containerd[2083]: 2025-07-07 00:00:47.932 [INFO][5157] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.66.64/26 handle="k8s-pod-network.f909e71267557346a04fd3e897bd0edb965a947059e7da2ddad7d37debc45d1f" host="ip-172-31-19-107" Jul 7 00:00:48.011072 containerd[2083]: 2025-07-07 00:00:47.942 [INFO][5157] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.66.66/26] block=192.168.66.64/26 handle="k8s-pod-network.f909e71267557346a04fd3e897bd0edb965a947059e7da2ddad7d37debc45d1f" host="ip-172-31-19-107" Jul 7 00:00:48.011072 containerd[2083]: 2025-07-07 00:00:47.943 [INFO][5157] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.66.66/26] handle="k8s-pod-network.f909e71267557346a04fd3e897bd0edb965a947059e7da2ddad7d37debc45d1f" host="ip-172-31-19-107" Jul 7 00:00:48.011072 containerd[2083]: 2025-07-07 00:00:47.943 [INFO][5157] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 00:00:48.011072 containerd[2083]: 2025-07-07 00:00:47.943 [INFO][5157] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.66/26] IPv6=[] ContainerID="f909e71267557346a04fd3e897bd0edb965a947059e7da2ddad7d37debc45d1f" HandleID="k8s-pod-network.f909e71267557346a04fd3e897bd0edb965a947059e7da2ddad7d37debc45d1f" Workload="ip--172--31--19--107-k8s-calico--apiserver--674b869996--75pq4-eth0" Jul 7 00:00:48.012050 containerd[2083]: 2025-07-07 00:00:47.947 [INFO][5125] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f909e71267557346a04fd3e897bd0edb965a947059e7da2ddad7d37debc45d1f" Namespace="calico-apiserver" Pod="calico-apiserver-674b869996-75pq4" WorkloadEndpoint="ip--172--31--19--107-k8s-calico--apiserver--674b869996--75pq4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--107-k8s-calico--apiserver--674b869996--75pq4-eth0", GenerateName:"calico-apiserver-674b869996-", Namespace:"calico-apiserver", SelfLink:"", UID:"c34bf4f5-bb6b-420d-9d8c-1e1dc634bceb", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"674b869996", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-107", ContainerID:"", Pod:"calico-apiserver-674b869996-75pq4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.66.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1a6bbed993c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:48.012050 containerd[2083]: 2025-07-07 00:00:47.947 [INFO][5125] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.66/32] ContainerID="f909e71267557346a04fd3e897bd0edb965a947059e7da2ddad7d37debc45d1f" Namespace="calico-apiserver" Pod="calico-apiserver-674b869996-75pq4" WorkloadEndpoint="ip--172--31--19--107-k8s-calico--apiserver--674b869996--75pq4-eth0" Jul 7 00:00:48.012050 containerd[2083]: 2025-07-07 00:00:47.949 [INFO][5125] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1a6bbed993c ContainerID="f909e71267557346a04fd3e897bd0edb965a947059e7da2ddad7d37debc45d1f" Namespace="calico-apiserver" Pod="calico-apiserver-674b869996-75pq4" WorkloadEndpoint="ip--172--31--19--107-k8s-calico--apiserver--674b869996--75pq4-eth0" Jul 7 00:00:48.012050 containerd[2083]: 2025-07-07 00:00:47.968 [INFO][5125] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f909e71267557346a04fd3e897bd0edb965a947059e7da2ddad7d37debc45d1f" Namespace="calico-apiserver" Pod="calico-apiserver-674b869996-75pq4" WorkloadEndpoint="ip--172--31--19--107-k8s-calico--apiserver--674b869996--75pq4-eth0" Jul 7 00:00:48.012050 containerd[2083]: 2025-07-07 00:00:47.970 [INFO][5125] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="f909e71267557346a04fd3e897bd0edb965a947059e7da2ddad7d37debc45d1f" Namespace="calico-apiserver" Pod="calico-apiserver-674b869996-75pq4" WorkloadEndpoint="ip--172--31--19--107-k8s-calico--apiserver--674b869996--75pq4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--107-k8s-calico--apiserver--674b869996--75pq4-eth0", GenerateName:"calico-apiserver-674b869996-", Namespace:"calico-apiserver", SelfLink:"", UID:"c34bf4f5-bb6b-420d-9d8c-1e1dc634bceb", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"674b869996", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-107", ContainerID:"f909e71267557346a04fd3e897bd0edb965a947059e7da2ddad7d37debc45d1f", Pod:"calico-apiserver-674b869996-75pq4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.66.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1a6bbed993c", MAC:"6e:79:86:d5:a2:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:48.012050 containerd[2083]: 2025-07-07 00:00:48.002 [INFO][5125] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f909e71267557346a04fd3e897bd0edb965a947059e7da2ddad7d37debc45d1f" Namespace="calico-apiserver" Pod="calico-apiserver-674b869996-75pq4" WorkloadEndpoint="ip--172--31--19--107-k8s-calico--apiserver--674b869996--75pq4-eth0" Jul 7 00:00:48.042022 containerd[2083]: time="2025-07-07T00:00:48.040200034Z" level=info msg="StopPodSandbox for \"2946e45277cf90d220237b5f463466853bc1a4c61bb560843d98b55977134283\"" Jul 7 00:00:48.042022 containerd[2083]: time="2025-07-07T00:00:48.040695705Z" level=info msg="StopPodSandbox for \"1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92\"" Jul 7 00:00:48.052916 containerd[2083]: time="2025-07-07T00:00:48.052313498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:00:48.052916 containerd[2083]: time="2025-07-07T00:00:48.052393555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:00:48.052916 containerd[2083]: time="2025-07-07T00:00:48.052418269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:48.052916 containerd[2083]: time="2025-07-07T00:00:48.052546681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:48.113127 systemd[1]: run-containerd-runc-k8s.io-f909e71267557346a04fd3e897bd0edb965a947059e7da2ddad7d37debc45d1f-runc.64m0wD.mount: Deactivated successfully. Jul 7 00:00:48.242411 containerd[2083]: time="2025-07-07T00:00:48.242349969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-674b869996-75pq4,Uid:c34bf4f5-bb6b-420d-9d8c-1e1dc634bceb,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f909e71267557346a04fd3e897bd0edb965a947059e7da2ddad7d37debc45d1f\"" Jul 7 00:00:48.315358 containerd[2083]: 2025-07-07 00:00:48.206 [INFO][5249] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92" Jul 7 00:00:48.315358 containerd[2083]: 2025-07-07 00:00:48.206 [INFO][5249] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92" iface="eth0" netns="/var/run/netns/cni-17d699b4-8ae5-6134-b8da-b168b69708a6" Jul 7 00:00:48.315358 containerd[2083]: 2025-07-07 00:00:48.207 [INFO][5249] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92" iface="eth0" netns="/var/run/netns/cni-17d699b4-8ae5-6134-b8da-b168b69708a6" Jul 7 00:00:48.315358 containerd[2083]: 2025-07-07 00:00:48.209 [INFO][5249] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92" iface="eth0" netns="/var/run/netns/cni-17d699b4-8ae5-6134-b8da-b168b69708a6" Jul 7 00:00:48.315358 containerd[2083]: 2025-07-07 00:00:48.209 [INFO][5249] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92" Jul 7 00:00:48.315358 containerd[2083]: 2025-07-07 00:00:48.210 [INFO][5249] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92" Jul 7 00:00:48.315358 containerd[2083]: 2025-07-07 00:00:48.274 [INFO][5278] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92" HandleID="k8s-pod-network.1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92" Workload="ip--172--31--19--107-k8s-goldmane--58fd7646b9--xq9q9-eth0" Jul 7 00:00:48.315358 containerd[2083]: 2025-07-07 00:00:48.275 [INFO][5278] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:48.315358 containerd[2083]: 2025-07-07 00:00:48.276 [INFO][5278] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:00:48.315358 containerd[2083]: 2025-07-07 00:00:48.293 [WARNING][5278] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92" HandleID="k8s-pod-network.1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92" Workload="ip--172--31--19--107-k8s-goldmane--58fd7646b9--xq9q9-eth0" Jul 7 00:00:48.315358 containerd[2083]: 2025-07-07 00:00:48.293 [INFO][5278] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92" HandleID="k8s-pod-network.1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92" Workload="ip--172--31--19--107-k8s-goldmane--58fd7646b9--xq9q9-eth0" Jul 7 00:00:48.315358 containerd[2083]: 2025-07-07 00:00:48.299 [INFO][5278] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:48.315358 containerd[2083]: 2025-07-07 00:00:48.310 [INFO][5249] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92" Jul 7 00:00:48.318134 containerd[2083]: time="2025-07-07T00:00:48.315331172Z" level=info msg="TearDown network for sandbox \"1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92\" successfully" Jul 7 00:00:48.318134 containerd[2083]: time="2025-07-07T00:00:48.315551745Z" level=info msg="StopPodSandbox for \"1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92\" returns successfully" Jul 7 00:00:48.318134 containerd[2083]: time="2025-07-07T00:00:48.317470693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-xq9q9,Uid:561fc67c-cd50-4c5b-b964-b8cb6f5c6bbe,Namespace:calico-system,Attempt:1,}" Jul 7 00:00:48.327628 containerd[2083]: 2025-07-07 00:00:48.214 [INFO][5250] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2946e45277cf90d220237b5f463466853bc1a4c61bb560843d98b55977134283" Jul 7 00:00:48.327628 containerd[2083]: 2025-07-07 00:00:48.215 [INFO][5250] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2946e45277cf90d220237b5f463466853bc1a4c61bb560843d98b55977134283" iface="eth0" netns="/var/run/netns/cni-853bea00-a678-8c28-73ba-6777d30cb3b7" Jul 7 00:00:48.327628 containerd[2083]: 2025-07-07 00:00:48.215 [INFO][5250] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2946e45277cf90d220237b5f463466853bc1a4c61bb560843d98b55977134283" iface="eth0" netns="/var/run/netns/cni-853bea00-a678-8c28-73ba-6777d30cb3b7" Jul 7 00:00:48.327628 containerd[2083]: 2025-07-07 00:00:48.216 [INFO][5250] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2946e45277cf90d220237b5f463466853bc1a4c61bb560843d98b55977134283" iface="eth0" netns="/var/run/netns/cni-853bea00-a678-8c28-73ba-6777d30cb3b7" Jul 7 00:00:48.327628 containerd[2083]: 2025-07-07 00:00:48.216 [INFO][5250] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2946e45277cf90d220237b5f463466853bc1a4c61bb560843d98b55977134283" Jul 7 00:00:48.327628 containerd[2083]: 2025-07-07 00:00:48.216 [INFO][5250] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2946e45277cf90d220237b5f463466853bc1a4c61bb560843d98b55977134283" Jul 7 00:00:48.327628 containerd[2083]: 2025-07-07 00:00:48.299 [INFO][5281] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2946e45277cf90d220237b5f463466853bc1a4c61bb560843d98b55977134283" HandleID="k8s-pod-network.2946e45277cf90d220237b5f463466853bc1a4c61bb560843d98b55977134283" Workload="ip--172--31--19--107-k8s-calico--kube--controllers--d699df5cb--rvx8c-eth0" Jul 7 00:00:48.327628 containerd[2083]: 2025-07-07 00:00:48.300 [INFO][5281] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:48.327628 containerd[2083]: 2025-07-07 00:00:48.300 [INFO][5281] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:00:48.327628 containerd[2083]: 2025-07-07 00:00:48.312 [WARNING][5281] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2946e45277cf90d220237b5f463466853bc1a4c61bb560843d98b55977134283" HandleID="k8s-pod-network.2946e45277cf90d220237b5f463466853bc1a4c61bb560843d98b55977134283" Workload="ip--172--31--19--107-k8s-calico--kube--controllers--d699df5cb--rvx8c-eth0" Jul 7 00:00:48.327628 containerd[2083]: 2025-07-07 00:00:48.313 [INFO][5281] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2946e45277cf90d220237b5f463466853bc1a4c61bb560843d98b55977134283" HandleID="k8s-pod-network.2946e45277cf90d220237b5f463466853bc1a4c61bb560843d98b55977134283" Workload="ip--172--31--19--107-k8s-calico--kube--controllers--d699df5cb--rvx8c-eth0" Jul 7 00:00:48.327628 containerd[2083]: 2025-07-07 00:00:48.315 [INFO][5281] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:48.327628 containerd[2083]: 2025-07-07 00:00:48.319 [INFO][5250] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2946e45277cf90d220237b5f463466853bc1a4c61bb560843d98b55977134283" Jul 7 00:00:48.328530 containerd[2083]: time="2025-07-07T00:00:48.328484127Z" level=info msg="TearDown network for sandbox \"2946e45277cf90d220237b5f463466853bc1a4c61bb560843d98b55977134283\" successfully" Jul 7 00:00:48.328530 containerd[2083]: time="2025-07-07T00:00:48.328530905Z" level=info msg="StopPodSandbox for \"2946e45277cf90d220237b5f463466853bc1a4c61bb560843d98b55977134283\" returns successfully" Jul 7 00:00:48.329847 containerd[2083]: time="2025-07-07T00:00:48.329558920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d699df5cb-rvx8c,Uid:3256e36e-3dfd-4340-92ad-002ae5ad9541,Namespace:calico-system,Attempt:1,}" Jul 7 00:00:48.444138 systemd-networkd[1648]: vxlan.calico: Gained IPv6LL Jul 7 00:00:48.545484 systemd[1]: run-netns-cni\x2d17d699b4\x2d8ae5\x2d6134\x2db8da\x2db168b69708a6.mount: Deactivated successfully. Jul 7 00:00:48.546421 systemd[1]: run-netns-cni\x2d853bea00\x2da678\x2d8c28\x2d73ba\x2d6777d30cb3b7.mount: Deactivated successfully. 
Jul 7 00:00:48.578796 systemd-networkd[1648]: cali8d281e2582f: Link UP Jul 7 00:00:48.585241 systemd-networkd[1648]: cali8d281e2582f: Gained carrier Jul 7 00:00:48.634886 containerd[2083]: 2025-07-07 00:00:48.419 [INFO][5301] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--107-k8s-goldmane--58fd7646b9--xq9q9-eth0 goldmane-58fd7646b9- calico-system 561fc67c-cd50-4c5b-b964-b8cb6f5c6bbe 995 0 2025-07-07 00:00:12 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-19-107 goldmane-58fd7646b9-xq9q9 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8d281e2582f [] [] }} ContainerID="5dc795fd116b5823ddc8ec12b94edd79c6cb711839e49fe8b55870ac07f359c3" Namespace="calico-system" Pod="goldmane-58fd7646b9-xq9q9" WorkloadEndpoint="ip--172--31--19--107-k8s-goldmane--58fd7646b9--xq9q9-" Jul 7 00:00:48.634886 containerd[2083]: 2025-07-07 00:00:48.420 [INFO][5301] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5dc795fd116b5823ddc8ec12b94edd79c6cb711839e49fe8b55870ac07f359c3" Namespace="calico-system" Pod="goldmane-58fd7646b9-xq9q9" WorkloadEndpoint="ip--172--31--19--107-k8s-goldmane--58fd7646b9--xq9q9-eth0" Jul 7 00:00:48.634886 containerd[2083]: 2025-07-07 00:00:48.490 [INFO][5324] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5dc795fd116b5823ddc8ec12b94edd79c6cb711839e49fe8b55870ac07f359c3" HandleID="k8s-pod-network.5dc795fd116b5823ddc8ec12b94edd79c6cb711839e49fe8b55870ac07f359c3" Workload="ip--172--31--19--107-k8s-goldmane--58fd7646b9--xq9q9-eth0" Jul 7 00:00:48.634886 containerd[2083]: 2025-07-07 00:00:48.493 [INFO][5324] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5dc795fd116b5823ddc8ec12b94edd79c6cb711839e49fe8b55870ac07f359c3" HandleID="k8s-pod-network.5dc795fd116b5823ddc8ec12b94edd79c6cb711839e49fe8b55870ac07f359c3" Workload="ip--172--31--19--107-k8s-goldmane--58fd7646b9--xq9q9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f900), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-107", "pod":"goldmane-58fd7646b9-xq9q9", "timestamp":"2025-07-07 00:00:48.490588573 +0000 UTC"}, Hostname:"ip-172-31-19-107", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:00:48.634886 containerd[2083]: 2025-07-07 00:00:48.493 [INFO][5324] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:48.634886 containerd[2083]: 2025-07-07 00:00:48.493 [INFO][5324] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:00:48.634886 containerd[2083]: 2025-07-07 00:00:48.493 [INFO][5324] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-107' Jul 7 00:00:48.634886 containerd[2083]: 2025-07-07 00:00:48.504 [INFO][5324] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5dc795fd116b5823ddc8ec12b94edd79c6cb711839e49fe8b55870ac07f359c3" host="ip-172-31-19-107" Jul 7 00:00:48.634886 containerd[2083]: 2025-07-07 00:00:48.511 [INFO][5324] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-107" Jul 7 00:00:48.634886 containerd[2083]: 2025-07-07 00:00:48.522 [INFO][5324] ipam/ipam.go 511: Trying affinity for 192.168.66.64/26 host="ip-172-31-19-107" Jul 7 00:00:48.634886 containerd[2083]: 2025-07-07 00:00:48.525 [INFO][5324] ipam/ipam.go 158: Attempting to load block cidr=192.168.66.64/26 host="ip-172-31-19-107" Jul 7 00:00:48.634886 containerd[2083]: 2025-07-07 00:00:48.528 [INFO][5324] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.66.64/26 host="ip-172-31-19-107" Jul 7 00:00:48.634886 containerd[2083]: 2025-07-07 00:00:48.528 [INFO][5324] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.66.64/26 handle="k8s-pod-network.5dc795fd116b5823ddc8ec12b94edd79c6cb711839e49fe8b55870ac07f359c3" host="ip-172-31-19-107" Jul 7 00:00:48.634886 containerd[2083]: 2025-07-07 00:00:48.531 [INFO][5324] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5dc795fd116b5823ddc8ec12b94edd79c6cb711839e49fe8b55870ac07f359c3 Jul 7 00:00:48.634886 containerd[2083]: 2025-07-07 00:00:48.548 [INFO][5324] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.66.64/26 handle="k8s-pod-network.5dc795fd116b5823ddc8ec12b94edd79c6cb711839e49fe8b55870ac07f359c3" host="ip-172-31-19-107" Jul 7 00:00:48.634886 containerd[2083]: 2025-07-07 00:00:48.564 [INFO][5324] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.66.67/26] block=192.168.66.64/26 handle="k8s-pod-network.5dc795fd116b5823ddc8ec12b94edd79c6cb711839e49fe8b55870ac07f359c3" host="ip-172-31-19-107" Jul 7 00:00:48.634886 containerd[2083]: 2025-07-07 00:00:48.564 [INFO][5324] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.66.67/26] handle="k8s-pod-network.5dc795fd116b5823ddc8ec12b94edd79c6cb711839e49fe8b55870ac07f359c3" host="ip-172-31-19-107" Jul 7 00:00:48.634886 containerd[2083]: 2025-07-07 00:00:48.565 [INFO][5324] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 00:00:48.634886 containerd[2083]: 2025-07-07 00:00:48.566 [INFO][5324] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.67/26] IPv6=[] ContainerID="5dc795fd116b5823ddc8ec12b94edd79c6cb711839e49fe8b55870ac07f359c3" HandleID="k8s-pod-network.5dc795fd116b5823ddc8ec12b94edd79c6cb711839e49fe8b55870ac07f359c3" Workload="ip--172--31--19--107-k8s-goldmane--58fd7646b9--xq9q9-eth0" Jul 7 00:00:48.638699 containerd[2083]: 2025-07-07 00:00:48.570 [INFO][5301] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5dc795fd116b5823ddc8ec12b94edd79c6cb711839e49fe8b55870ac07f359c3" Namespace="calico-system" Pod="goldmane-58fd7646b9-xq9q9" WorkloadEndpoint="ip--172--31--19--107-k8s-goldmane--58fd7646b9--xq9q9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--107-k8s-goldmane--58fd7646b9--xq9q9-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"561fc67c-cd50-4c5b-b964-b8cb6f5c6bbe", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-107", ContainerID:"", Pod:"goldmane-58fd7646b9-xq9q9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.66.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8d281e2582f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:48.638699 containerd[2083]: 2025-07-07 00:00:48.570 [INFO][5301] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.67/32] ContainerID="5dc795fd116b5823ddc8ec12b94edd79c6cb711839e49fe8b55870ac07f359c3" Namespace="calico-system" Pod="goldmane-58fd7646b9-xq9q9" WorkloadEndpoint="ip--172--31--19--107-k8s-goldmane--58fd7646b9--xq9q9-eth0" Jul 7 00:00:48.638699 containerd[2083]: 2025-07-07 00:00:48.570 [INFO][5301] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8d281e2582f ContainerID="5dc795fd116b5823ddc8ec12b94edd79c6cb711839e49fe8b55870ac07f359c3" Namespace="calico-system" Pod="goldmane-58fd7646b9-xq9q9" WorkloadEndpoint="ip--172--31--19--107-k8s-goldmane--58fd7646b9--xq9q9-eth0" Jul 7 00:00:48.638699 containerd[2083]: 2025-07-07 00:00:48.592 [INFO][5301] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5dc795fd116b5823ddc8ec12b94edd79c6cb711839e49fe8b55870ac07f359c3" Namespace="calico-system" Pod="goldmane-58fd7646b9-xq9q9" WorkloadEndpoint="ip--172--31--19--107-k8s-goldmane--58fd7646b9--xq9q9-eth0" Jul 7 00:00:48.638699 containerd[2083]: 2025-07-07 00:00:48.594 [INFO][5301] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5dc795fd116b5823ddc8ec12b94edd79c6cb711839e49fe8b55870ac07f359c3" Namespace="calico-system" Pod="goldmane-58fd7646b9-xq9q9" 
WorkloadEndpoint="ip--172--31--19--107-k8s-goldmane--58fd7646b9--xq9q9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--107-k8s-goldmane--58fd7646b9--xq9q9-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"561fc67c-cd50-4c5b-b964-b8cb6f5c6bbe", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-107", ContainerID:"5dc795fd116b5823ddc8ec12b94edd79c6cb711839e49fe8b55870ac07f359c3", Pod:"goldmane-58fd7646b9-xq9q9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.66.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8d281e2582f", MAC:"ba:3d:c8:99:ed:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:48.638699 containerd[2083]: 2025-07-07 00:00:48.625 [INFO][5301] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5dc795fd116b5823ddc8ec12b94edd79c6cb711839e49fe8b55870ac07f359c3" Namespace="calico-system" Pod="goldmane-58fd7646b9-xq9q9" WorkloadEndpoint="ip--172--31--19--107-k8s-goldmane--58fd7646b9--xq9q9-eth0" Jul 7 00:00:48.693409 containerd[2083]: time="2025-07-07T00:00:48.692323821Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:00:48.693409 containerd[2083]: time="2025-07-07T00:00:48.692431498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:00:48.693409 containerd[2083]: time="2025-07-07T00:00:48.692454484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:48.693409 containerd[2083]: time="2025-07-07T00:00:48.692600857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:48.729536 systemd-networkd[1648]: cali96eb455d75e: Link UP Jul 7 00:00:48.731994 systemd-networkd[1648]: cali96eb455d75e: Gained carrier Jul 7 00:00:48.769294 containerd[2083]: 2025-07-07 00:00:48.440 [INFO][5311] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--107-k8s-calico--kube--controllers--d699df5cb--rvx8c-eth0 calico-kube-controllers-d699df5cb- calico-system 3256e36e-3dfd-4340-92ad-002ae5ad9541 996 0 2025-07-07 00:00:13 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:d699df5cb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-19-107 calico-kube-controllers-d699df5cb-rvx8c eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali96eb455d75e [] [] }} ContainerID="a30d4f0a30730bc0e64adb603119190f1c6e6eb22d782fe627d233d9a9676cad" Namespace="calico-system" Pod="calico-kube-controllers-d699df5cb-rvx8c" WorkloadEndpoint="ip--172--31--19--107-k8s-calico--kube--controllers--d699df5cb--rvx8c-" Jul 7 00:00:48.769294 containerd[2083]: 2025-07-07 00:00:48.440 [INFO][5311] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a30d4f0a30730bc0e64adb603119190f1c6e6eb22d782fe627d233d9a9676cad" Namespace="calico-system" Pod="calico-kube-controllers-d699df5cb-rvx8c" WorkloadEndpoint="ip--172--31--19--107-k8s-calico--kube--controllers--d699df5cb--rvx8c-eth0" Jul 7 00:00:48.769294 containerd[2083]: 2025-07-07 00:00:48.506 [INFO][5331] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a30d4f0a30730bc0e64adb603119190f1c6e6eb22d782fe627d233d9a9676cad" HandleID="k8s-pod-network.a30d4f0a30730bc0e64adb603119190f1c6e6eb22d782fe627d233d9a9676cad" Workload="ip--172--31--19--107-k8s-calico--kube--controllers--d699df5cb--rvx8c-eth0" Jul 7 00:00:48.769294 containerd[2083]: 2025-07-07 00:00:48.507 [INFO][5331] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a30d4f0a30730bc0e64adb603119190f1c6e6eb22d782fe627d233d9a9676cad" HandleID="k8s-pod-network.a30d4f0a30730bc0e64adb603119190f1c6e6eb22d782fe627d233d9a9676cad" Workload="ip--172--31--19--107-k8s-calico--kube--controllers--d699df5cb--rvx8c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5010), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-107", "pod":"calico-kube-controllers-d699df5cb-rvx8c", "timestamp":"2025-07-07 00:00:48.506391618 +0000 UTC"}, Hostname:"ip-172-31-19-107", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:00:48.769294 containerd[2083]: 2025-07-07 00:00:48.507 [INFO][5331] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:48.769294 containerd[2083]: 2025-07-07 00:00:48.566 [INFO][5331] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:00:48.769294 containerd[2083]: 2025-07-07 00:00:48.567 [INFO][5331] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-107' Jul 7 00:00:48.769294 containerd[2083]: 2025-07-07 00:00:48.604 [INFO][5331] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a30d4f0a30730bc0e64adb603119190f1c6e6eb22d782fe627d233d9a9676cad" host="ip-172-31-19-107" Jul 7 00:00:48.769294 containerd[2083]: 2025-07-07 00:00:48.641 [INFO][5331] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-107" Jul 7 00:00:48.769294 containerd[2083]: 2025-07-07 00:00:48.659 [INFO][5331] ipam/ipam.go 511: Trying affinity for 192.168.66.64/26 host="ip-172-31-19-107" Jul 7 00:00:48.769294 containerd[2083]: 2025-07-07 00:00:48.665 [INFO][5331] ipam/ipam.go 158: Attempting to load block cidr=192.168.66.64/26 host="ip-172-31-19-107" Jul 7 00:00:48.769294 containerd[2083]: 2025-07-07 00:00:48.670 [INFO][5331] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.66.64/26 host="ip-172-31-19-107" Jul 7 00:00:48.769294 containerd[2083]: 2025-07-07 00:00:48.670 [INFO][5331] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.66.64/26 handle="k8s-pod-network.a30d4f0a30730bc0e64adb603119190f1c6e6eb22d782fe627d233d9a9676cad" host="ip-172-31-19-107" Jul 7 00:00:48.769294 containerd[2083]: 2025-07-07 00:00:48.676 [INFO][5331] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a30d4f0a30730bc0e64adb603119190f1c6e6eb22d782fe627d233d9a9676cad Jul 7 00:00:48.769294 containerd[2083]: 2025-07-07 00:00:48.688 [INFO][5331] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.66.64/26 handle="k8s-pod-network.a30d4f0a30730bc0e64adb603119190f1c6e6eb22d782fe627d233d9a9676cad" host="ip-172-31-19-107" Jul 7 00:00:48.769294 containerd[2083]: 2025-07-07 00:00:48.710 [INFO][5331] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.66.68/26] block=192.168.66.64/26 handle="k8s-pod-network.a30d4f0a30730bc0e64adb603119190f1c6e6eb22d782fe627d233d9a9676cad" host="ip-172-31-19-107" Jul 7 00:00:48.769294 containerd[2083]: 2025-07-07 00:00:48.711 [INFO][5331] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.66.68/26] handle="k8s-pod-network.a30d4f0a30730bc0e64adb603119190f1c6e6eb22d782fe627d233d9a9676cad" host="ip-172-31-19-107" Jul 7 00:00:48.769294 containerd[2083]: 2025-07-07 00:00:48.712 [INFO][5331] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 00:00:48.769294 containerd[2083]: 2025-07-07 00:00:48.713 [INFO][5331] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.68/26] IPv6=[] ContainerID="a30d4f0a30730bc0e64adb603119190f1c6e6eb22d782fe627d233d9a9676cad" HandleID="k8s-pod-network.a30d4f0a30730bc0e64adb603119190f1c6e6eb22d782fe627d233d9a9676cad" Workload="ip--172--31--19--107-k8s-calico--kube--controllers--d699df5cb--rvx8c-eth0" Jul 7 00:00:48.772352 containerd[2083]: 2025-07-07 00:00:48.722 [INFO][5311] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a30d4f0a30730bc0e64adb603119190f1c6e6eb22d782fe627d233d9a9676cad" Namespace="calico-system" Pod="calico-kube-controllers-d699df5cb-rvx8c" WorkloadEndpoint="ip--172--31--19--107-k8s-calico--kube--controllers--d699df5cb--rvx8c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--107-k8s-calico--kube--controllers--d699df5cb--rvx8c-eth0", GenerateName:"calico-kube-controllers-d699df5cb-", Namespace:"calico-system", SelfLink:"", UID:"3256e36e-3dfd-4340-92ad-002ae5ad9541", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d699df5cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-107", ContainerID:"", Pod:"calico-kube-controllers-d699df5cb-rvx8c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.66.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali96eb455d75e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:48.772352 containerd[2083]: 2025-07-07 00:00:48.722 [INFO][5311] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.68/32] ContainerID="a30d4f0a30730bc0e64adb603119190f1c6e6eb22d782fe627d233d9a9676cad" Namespace="calico-system" Pod="calico-kube-controllers-d699df5cb-rvx8c" WorkloadEndpoint="ip--172--31--19--107-k8s-calico--kube--controllers--d699df5cb--rvx8c-eth0" Jul 7 00:00:48.772352 containerd[2083]: 2025-07-07 00:00:48.722 [INFO][5311] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali96eb455d75e ContainerID="a30d4f0a30730bc0e64adb603119190f1c6e6eb22d782fe627d233d9a9676cad" Namespace="calico-system" Pod="calico-kube-controllers-d699df5cb-rvx8c" WorkloadEndpoint="ip--172--31--19--107-k8s-calico--kube--controllers--d699df5cb--rvx8c-eth0" Jul 7 00:00:48.772352 containerd[2083]: 2025-07-07 00:00:48.734 [INFO][5311] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a30d4f0a30730bc0e64adb603119190f1c6e6eb22d782fe627d233d9a9676cad" Namespace="calico-system" Pod="calico-kube-controllers-d699df5cb-rvx8c" WorkloadEndpoint="ip--172--31--19--107-k8s-calico--kube--controllers--d699df5cb--rvx8c-eth0" Jul 7 00:00:48.772352 containerd[2083]: 2025-07-07 00:00:48.734 
[INFO][5311] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a30d4f0a30730bc0e64adb603119190f1c6e6eb22d782fe627d233d9a9676cad" Namespace="calico-system" Pod="calico-kube-controllers-d699df5cb-rvx8c" WorkloadEndpoint="ip--172--31--19--107-k8s-calico--kube--controllers--d699df5cb--rvx8c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--107-k8s-calico--kube--controllers--d699df5cb--rvx8c-eth0", GenerateName:"calico-kube-controllers-d699df5cb-", Namespace:"calico-system", SelfLink:"", UID:"3256e36e-3dfd-4340-92ad-002ae5ad9541", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d699df5cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-107", ContainerID:"a30d4f0a30730bc0e64adb603119190f1c6e6eb22d782fe627d233d9a9676cad", Pod:"calico-kube-controllers-d699df5cb-rvx8c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.66.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali96eb455d75e", MAC:"22:a3:a5:70:a7:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:48.772352 containerd[2083]: 2025-07-07 00:00:48.763 [INFO][5311] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a30d4f0a30730bc0e64adb603119190f1c6e6eb22d782fe627d233d9a9676cad" Namespace="calico-system" Pod="calico-kube-controllers-d699df5cb-rvx8c" WorkloadEndpoint="ip--172--31--19--107-k8s-calico--kube--controllers--d699df5cb--rvx8c-eth0" Jul 7 00:00:48.839089 containerd[2083]: time="2025-07-07T00:00:48.837627001Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:00:48.840012 containerd[2083]: time="2025-07-07T00:00:48.839712526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:00:48.840012 containerd[2083]: time="2025-07-07T00:00:48.839754937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:48.840012 containerd[2083]: time="2025-07-07T00:00:48.839894770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:48.885312 containerd[2083]: time="2025-07-07T00:00:48.885266433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-xq9q9,Uid:561fc67c-cd50-4c5b-b964-b8cb6f5c6bbe,Namespace:calico-system,Attempt:1,} returns sandbox id \"5dc795fd116b5823ddc8ec12b94edd79c6cb711839e49fe8b55870ac07f359c3\"" Jul 7 00:00:48.933307 containerd[2083]: time="2025-07-07T00:00:48.933269406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d699df5cb-rvx8c,Uid:3256e36e-3dfd-4340-92ad-002ae5ad9541,Namespace:calico-system,Attempt:1,} returns sandbox id \"a30d4f0a30730bc0e64adb603119190f1c6e6eb22d782fe627d233d9a9676cad\"" Jul 7 00:00:49.040694 containerd[2083]: time="2025-07-07T00:00:49.040635360Z" level=info msg="StopPodSandbox for \"8858dd71f9256dbbe403acf740a9b2a6172f39804d7817804767375fb5ed5559\"" Jul 7 00:00:49.043838 containerd[2083]: time="2025-07-07T00:00:49.041463001Z" level=info msg="StopPodSandbox for \"80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56\"" Jul 7 00:00:49.047413 containerd[2083]: time="2025-07-07T00:00:49.044580483Z" level=info msg="StopPodSandbox for \"8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b\"" Jul 7 00:00:49.298424 containerd[2083]: 2025-07-07 00:00:49.187 [INFO][5471] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56" Jul 7 00:00:49.298424 containerd[2083]: 2025-07-07 00:00:49.188 [INFO][5471] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56" iface="eth0" netns="/var/run/netns/cni-7fa70222-0475-482c-b218-1fdf28ee5c4b" Jul 7 00:00:49.298424 containerd[2083]: 2025-07-07 00:00:49.190 [INFO][5471] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56" iface="eth0" netns="/var/run/netns/cni-7fa70222-0475-482c-b218-1fdf28ee5c4b" Jul 7 00:00:49.298424 containerd[2083]: 2025-07-07 00:00:49.190 [INFO][5471] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56" iface="eth0" netns="/var/run/netns/cni-7fa70222-0475-482c-b218-1fdf28ee5c4b" Jul 7 00:00:49.298424 containerd[2083]: 2025-07-07 00:00:49.190 [INFO][5471] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56" Jul 7 00:00:49.298424 containerd[2083]: 2025-07-07 00:00:49.190 [INFO][5471] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56" Jul 7 00:00:49.298424 containerd[2083]: 2025-07-07 00:00:49.255 [INFO][5492] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56" HandleID="k8s-pod-network.80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56" Workload="ip--172--31--19--107-k8s-csi--node--driver--vmlwg-eth0" Jul 7 00:00:49.298424 containerd[2083]: 2025-07-07 00:00:49.255 [INFO][5492] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:49.298424 containerd[2083]: 2025-07-07 00:00:49.255 [INFO][5492] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:00:49.298424 containerd[2083]: 2025-07-07 00:00:49.272 [WARNING][5492] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56" HandleID="k8s-pod-network.80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56" Workload="ip--172--31--19--107-k8s-csi--node--driver--vmlwg-eth0" Jul 7 00:00:49.298424 containerd[2083]: 2025-07-07 00:00:49.272 [INFO][5492] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56" HandleID="k8s-pod-network.80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56" Workload="ip--172--31--19--107-k8s-csi--node--driver--vmlwg-eth0" Jul 7 00:00:49.298424 containerd[2083]: 2025-07-07 00:00:49.274 [INFO][5492] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:49.298424 containerd[2083]: 2025-07-07 00:00:49.285 [INFO][5471] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56" Jul 7 00:00:49.298424 containerd[2083]: time="2025-07-07T00:00:49.298355707Z" level=info msg="TearDown network for sandbox \"80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56\" successfully" Jul 7 00:00:49.298424 containerd[2083]: time="2025-07-07T00:00:49.298392290Z" level=info msg="StopPodSandbox for \"80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56\" returns successfully" Jul 7 00:00:49.301422 containerd[2083]: time="2025-07-07T00:00:49.299524119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vmlwg,Uid:fd3bd012-86e5-4807-95d5-ad6901284597,Namespace:calico-system,Attempt:1,}" Jul 7 00:00:49.322207 containerd[2083]: 2025-07-07 00:00:49.172 [INFO][5470] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8858dd71f9256dbbe403acf740a9b2a6172f39804d7817804767375fb5ed5559" Jul 7 00:00:49.322207 containerd[2083]: 2025-07-07 00:00:49.173 [INFO][5470] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8858dd71f9256dbbe403acf740a9b2a6172f39804d7817804767375fb5ed5559" iface="eth0" netns="/var/run/netns/cni-96463f7e-121d-af52-e88b-1ade41fefe08" Jul 7 00:00:49.322207 containerd[2083]: 2025-07-07 00:00:49.173 [INFO][5470] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8858dd71f9256dbbe403acf740a9b2a6172f39804d7817804767375fb5ed5559" iface="eth0" netns="/var/run/netns/cni-96463f7e-121d-af52-e88b-1ade41fefe08" Jul 7 00:00:49.322207 containerd[2083]: 2025-07-07 00:00:49.178 [INFO][5470] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8858dd71f9256dbbe403acf740a9b2a6172f39804d7817804767375fb5ed5559" iface="eth0" netns="/var/run/netns/cni-96463f7e-121d-af52-e88b-1ade41fefe08" Jul 7 00:00:49.322207 containerd[2083]: 2025-07-07 00:00:49.178 [INFO][5470] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8858dd71f9256dbbe403acf740a9b2a6172f39804d7817804767375fb5ed5559" Jul 7 00:00:49.322207 containerd[2083]: 2025-07-07 00:00:49.178 [INFO][5470] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8858dd71f9256dbbe403acf740a9b2a6172f39804d7817804767375fb5ed5559" Jul 7 00:00:49.322207 containerd[2083]: 2025-07-07 00:00:49.266 [INFO][5490] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8858dd71f9256dbbe403acf740a9b2a6172f39804d7817804767375fb5ed5559" HandleID="k8s-pod-network.8858dd71f9256dbbe403acf740a9b2a6172f39804d7817804767375fb5ed5559" Workload="ip--172--31--19--107-k8s-coredns--7c65d6cfc9--p6qwp-eth0" Jul 7 00:00:49.322207 containerd[2083]: 2025-07-07 00:00:49.267 [INFO][5490] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:49.322207 containerd[2083]: 2025-07-07 00:00:49.280 [INFO][5490] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:00:49.322207 containerd[2083]: 2025-07-07 00:00:49.301 [WARNING][5490] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8858dd71f9256dbbe403acf740a9b2a6172f39804d7817804767375fb5ed5559" HandleID="k8s-pod-network.8858dd71f9256dbbe403acf740a9b2a6172f39804d7817804767375fb5ed5559" Workload="ip--172--31--19--107-k8s-coredns--7c65d6cfc9--p6qwp-eth0" Jul 7 00:00:49.322207 containerd[2083]: 2025-07-07 00:00:49.301 [INFO][5490] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8858dd71f9256dbbe403acf740a9b2a6172f39804d7817804767375fb5ed5559" HandleID="k8s-pod-network.8858dd71f9256dbbe403acf740a9b2a6172f39804d7817804767375fb5ed5559" Workload="ip--172--31--19--107-k8s-coredns--7c65d6cfc9--p6qwp-eth0" Jul 7 00:00:49.322207 containerd[2083]: 2025-07-07 00:00:49.305 [INFO][5490] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:49.322207 containerd[2083]: 2025-07-07 00:00:49.312 [INFO][5470] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8858dd71f9256dbbe403acf740a9b2a6172f39804d7817804767375fb5ed5559" Jul 7 00:00:49.322207 containerd[2083]: time="2025-07-07T00:00:49.320171999Z" level=info msg="TearDown network for sandbox \"8858dd71f9256dbbe403acf740a9b2a6172f39804d7817804767375fb5ed5559\" successfully" Jul 7 00:00:49.322207 containerd[2083]: time="2025-07-07T00:00:49.320221379Z" level=info msg="StopPodSandbox for \"8858dd71f9256dbbe403acf740a9b2a6172f39804d7817804767375fb5ed5559\" returns successfully" Jul 7 00:00:49.341898 containerd[2083]: time="2025-07-07T00:00:49.341236474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-p6qwp,Uid:3bf78fb3-72f6-471c-b914-66a504f5315e,Namespace:kube-system,Attempt:1,}" Jul 7 00:00:49.357784 containerd[2083]: 2025-07-07 00:00:49.214 [INFO][5472] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b" Jul 7 00:00:49.357784 containerd[2083]: 2025-07-07 00:00:49.216 [INFO][5472] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b" iface="eth0" netns="/var/run/netns/cni-79066602-1754-27c5-c2f3-4e26bd375a23" Jul 7 00:00:49.357784 containerd[2083]: 2025-07-07 00:00:49.217 [INFO][5472] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b" iface="eth0" netns="/var/run/netns/cni-79066602-1754-27c5-c2f3-4e26bd375a23" Jul 7 00:00:49.357784 containerd[2083]: 2025-07-07 00:00:49.217 [INFO][5472] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b" iface="eth0" netns="/var/run/netns/cni-79066602-1754-27c5-c2f3-4e26bd375a23" Jul 7 00:00:49.357784 containerd[2083]: 2025-07-07 00:00:49.217 [INFO][5472] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b" Jul 7 00:00:49.357784 containerd[2083]: 2025-07-07 00:00:49.217 [INFO][5472] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b" Jul 7 00:00:49.357784 containerd[2083]: 2025-07-07 00:00:49.303 [INFO][5500] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b" HandleID="k8s-pod-network.8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b" Workload="ip--172--31--19--107-k8s-coredns--7c65d6cfc9--xlnl6-eth0" Jul 7 00:00:49.357784 containerd[2083]: 2025-07-07 00:00:49.304 [INFO][5500] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:49.357784 containerd[2083]: 2025-07-07 00:00:49.305 [INFO][5500] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:00:49.357784 containerd[2083]: 2025-07-07 00:00:49.318 [WARNING][5500] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b" HandleID="k8s-pod-network.8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b" Workload="ip--172--31--19--107-k8s-coredns--7c65d6cfc9--xlnl6-eth0" Jul 7 00:00:49.357784 containerd[2083]: 2025-07-07 00:00:49.318 [INFO][5500] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b" HandleID="k8s-pod-network.8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b" Workload="ip--172--31--19--107-k8s-coredns--7c65d6cfc9--xlnl6-eth0" Jul 7 00:00:49.357784 containerd[2083]: 2025-07-07 00:00:49.321 [INFO][5500] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:49.357784 containerd[2083]: 2025-07-07 00:00:49.354 [INFO][5472] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b" Jul 7 00:00:49.358461 containerd[2083]: time="2025-07-07T00:00:49.357953589Z" level=info msg="TearDown network for sandbox \"8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b\" successfully" Jul 7 00:00:49.358461 containerd[2083]: time="2025-07-07T00:00:49.357994086Z" level=info msg="StopPodSandbox for \"8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b\" returns successfully" Jul 7 00:00:49.358911 containerd[2083]: time="2025-07-07T00:00:49.358866869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xlnl6,Uid:2e6c58c9-1e4d-4fb6-9bf1-ad7b4521fb7e,Namespace:kube-system,Attempt:1,}" Jul 7 00:00:49.533958 systemd-networkd[1648]: cali1a6bbed993c: Gained IPv6LL Jul 7 00:00:49.544130 systemd[1]: Started sshd@9-172.31.19.107:22-147.75.109.163:37458.service - OpenSSH per-connection server daemon (147.75.109.163:37458). Jul 7 00:00:49.565158 systemd[1]: run-netns-cni\x2d7fa70222\x2d0475\x2d482c\x2db218\x2d1fdf28ee5c4b.mount: Deactivated successfully. Jul 7 00:00:49.566187 systemd[1]: run-netns-cni\x2d96463f7e\x2d121d\x2daf52\x2de88b\x2d1ade41fefe08.mount: Deactivated successfully. Jul 7 00:00:49.566816 systemd[1]: run-netns-cni\x2d79066602\x2d1754\x2d27c5\x2dc2f3\x2d4e26bd375a23.mount: Deactivated successfully. Jul 7 00:00:49.801065 systemd-networkd[1648]: calic2f301a877d: Link UP Jul 7 00:00:49.803519 systemd-networkd[1648]: calic2f301a877d: Gained carrier Jul 7 00:00:49.833958 sshd[5560]: Accepted publickey for core from 147.75.109.163 port 37458 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:00:49.854617 sshd[5560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:00:49.861586 containerd[2083]: 2025-07-07 00:00:49.444 [INFO][5510] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--107-k8s-csi--node--driver--vmlwg-eth0 csi-node-driver- calico-system fd3bd012-86e5-4807-95d5-ad6901284597 1015 0 2025-07-07 00:00:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-19-107 csi-node-driver-vmlwg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic2f301a877d [] [] }} ContainerID="4215ddfea058f17b8b4dcedfbf18e67acec438012183d1c35386b79cf91df7fc" Namespace="calico-system" Pod="csi-node-driver-vmlwg" WorkloadEndpoint="ip--172--31--19--107-k8s-csi--node--driver--vmlwg-" Jul 7 00:00:49.861586 containerd[2083]: 2025-07-07 00:00:49.444 [INFO][5510] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4215ddfea058f17b8b4dcedfbf18e67acec438012183d1c35386b79cf91df7fc" Namespace="calico-system" Pod="csi-node-driver-vmlwg" WorkloadEndpoint="ip--172--31--19--107-k8s-csi--node--driver--vmlwg-eth0" Jul 7 00:00:49.861586 containerd[2083]: 2025-07-07 00:00:49.653 [INFO][5550] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4215ddfea058f17b8b4dcedfbf18e67acec438012183d1c35386b79cf91df7fc" HandleID="k8s-pod-network.4215ddfea058f17b8b4dcedfbf18e67acec438012183d1c35386b79cf91df7fc" Workload="ip--172--31--19--107-k8s-csi--node--driver--vmlwg-eth0" Jul 7 00:00:49.861586 containerd[2083]: 2025-07-07 00:00:49.655 
[INFO][5550] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4215ddfea058f17b8b4dcedfbf18e67acec438012183d1c35386b79cf91df7fc" HandleID="k8s-pod-network.4215ddfea058f17b8b4dcedfbf18e67acec438012183d1c35386b79cf91df7fc" Workload="ip--172--31--19--107-k8s-csi--node--driver--vmlwg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000122370), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-107", "pod":"csi-node-driver-vmlwg", "timestamp":"2025-07-07 00:00:49.65202124 +0000 UTC"}, Hostname:"ip-172-31-19-107", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:00:49.861586 containerd[2083]: 2025-07-07 00:00:49.655 [INFO][5550] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:49.861586 containerd[2083]: 2025-07-07 00:00:49.655 [INFO][5550] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:00:49.861586 containerd[2083]: 2025-07-07 00:00:49.655 [INFO][5550] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-107' Jul 7 00:00:49.861586 containerd[2083]: 2025-07-07 00:00:49.676 [INFO][5550] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4215ddfea058f17b8b4dcedfbf18e67acec438012183d1c35386b79cf91df7fc" host="ip-172-31-19-107" Jul 7 00:00:49.861586 containerd[2083]: 2025-07-07 00:00:49.691 [INFO][5550] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-107" Jul 7 00:00:49.861586 containerd[2083]: 2025-07-07 00:00:49.703 [INFO][5550] ipam/ipam.go 511: Trying affinity for 192.168.66.64/26 host="ip-172-31-19-107" Jul 7 00:00:49.861586 containerd[2083]: 2025-07-07 00:00:49.708 [INFO][5550] ipam/ipam.go 158: Attempting to load block cidr=192.168.66.64/26 host="ip-172-31-19-107" Jul 7 00:00:49.861586 containerd[2083]: 2025-07-07 00:00:49.712 [INFO][5550] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.66.64/26 host="ip-172-31-19-107" Jul 7 00:00:49.861586 containerd[2083]: 2025-07-07 00:00:49.713 [INFO][5550] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.66.64/26 handle="k8s-pod-network.4215ddfea058f17b8b4dcedfbf18e67acec438012183d1c35386b79cf91df7fc" host="ip-172-31-19-107" Jul 7 00:00:49.861586 containerd[2083]: 2025-07-07 00:00:49.717 [INFO][5550] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4215ddfea058f17b8b4dcedfbf18e67acec438012183d1c35386b79cf91df7fc Jul 7 00:00:49.861586 containerd[2083]: 2025-07-07 00:00:49.732 [INFO][5550] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.66.64/26 handle="k8s-pod-network.4215ddfea058f17b8b4dcedfbf18e67acec438012183d1c35386b79cf91df7fc" host="ip-172-31-19-107" Jul 7 00:00:49.861586 containerd[2083]: 2025-07-07 00:00:49.752 [INFO][5550] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.66.69/26] block=192.168.66.64/26 handle="k8s-pod-network.4215ddfea058f17b8b4dcedfbf18e67acec438012183d1c35386b79cf91df7fc" host="ip-172-31-19-107" Jul 7 00:00:49.861586 containerd[2083]: 2025-07-07 00:00:49.752 [INFO][5550] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.66.69/26] handle="k8s-pod-network.4215ddfea058f17b8b4dcedfbf18e67acec438012183d1c35386b79cf91df7fc" host="ip-172-31-19-107" Jul 7 00:00:49.861586 containerd[2083]: 2025-07-07 00:00:49.754 [INFO][5550] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
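
The [5550] exchange above is Calico's host-affinity IPAM in one piece: take the host-wide lock, look up this node's block affinities, confirm the 192.168.66.64/26 block, claim one address under a fresh handle, write the block back, release the lock. (The earlier WARNING lines, "Asked to release address but it doesn't exist. Ignoring", are the DEL side of the same design: releases are idempotent, so a repeated teardown is harmless.) A minimal Go sketch of the claim path, with illustrative types rather than Calico's real ipam package:

    package main

    import (
        "fmt"
        "net/netip"
        "sync"
    )

    // hostLock stands in for Calico's host-wide IPAM lock; on a real node it
    // serializes every CNI ADD/DEL so block reads and writes never interleave.
    var hostLock sync.Mutex

    // block is a toy model of an affine IPAM block: a CIDR plus a used-address set.
    type block struct {
        cidr netip.Prefix
        used map[netip.Addr]string // addr -> handle ID
    }

    // assignOne mirrors the logged flow: lock, scan the block affine to this
    // host, claim the first free address under a new handle, release the lock.
    func assignOne(b *block, handleID string) (netip.Addr, error) {
        hostLock.Lock()
        defer hostLock.Unlock() // "Released host-wide IPAM lock."

        for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
            if _, taken := b.used[a]; !taken {
                b.used[a] = handleID // "Writing block in order to claim IPs"
                return a, nil
            }
        }
        return netip.Addr{}, fmt.Errorf("block %s exhausted", b.cidr)
    }

    func main() {
        b := &block{cidr: netip.MustParsePrefix("192.168.66.64/26"), used: map[netip.Addr]string{}}
        // Pretend .64-.68 were claimed earlier; the next claim then lands on
        // .69, matching the csi-node-driver assignment in the log.
        for a, i := netip.MustParseAddr("192.168.66.64"), 0; i < 5; a, i = a.Next(), i+1 {
            b.used[a] = "earlier"
        }
        addr, _ := assignOne(b, "k8s-pod-network.4215ddfe...")
        fmt.Println(addr) // 192.168.66.69
    }
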
Jul 7 00:00:49.861586 containerd[2083]: 2025-07-07 00:00:49.754 [INFO][5550] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.69/26] IPv6=[] ContainerID="4215ddfea058f17b8b4dcedfbf18e67acec438012183d1c35386b79cf91df7fc" HandleID="k8s-pod-network.4215ddfea058f17b8b4dcedfbf18e67acec438012183d1c35386b79cf91df7fc" Workload="ip--172--31--19--107-k8s-csi--node--driver--vmlwg-eth0" Jul 7 00:00:49.862830 containerd[2083]: 2025-07-07 00:00:49.778 [INFO][5510] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4215ddfea058f17b8b4dcedfbf18e67acec438012183d1c35386b79cf91df7fc" Namespace="calico-system" Pod="csi-node-driver-vmlwg" WorkloadEndpoint="ip--172--31--19--107-k8s-csi--node--driver--vmlwg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--107-k8s-csi--node--driver--vmlwg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fd3bd012-86e5-4807-95d5-ad6901284597", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-107", ContainerID:"", Pod:"csi-node-driver-vmlwg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.66.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic2f301a877d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:49.862830 containerd[2083]: 2025-07-07 00:00:49.784 [INFO][5510] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.69/32] ContainerID="4215ddfea058f17b8b4dcedfbf18e67acec438012183d1c35386b79cf91df7fc" Namespace="calico-system" Pod="csi-node-driver-vmlwg" WorkloadEndpoint="ip--172--31--19--107-k8s-csi--node--driver--vmlwg-eth0" Jul 7 00:00:49.862830 containerd[2083]: 2025-07-07 00:00:49.784 [INFO][5510] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic2f301a877d ContainerID="4215ddfea058f17b8b4dcedfbf18e67acec438012183d1c35386b79cf91df7fc" Namespace="calico-system" Pod="csi-node-driver-vmlwg" WorkloadEndpoint="ip--172--31--19--107-k8s-csi--node--driver--vmlwg-eth0" Jul 7 00:00:49.862830 containerd[2083]: 2025-07-07 00:00:49.805 [INFO][5510] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4215ddfea058f17b8b4dcedfbf18e67acec438012183d1c35386b79cf91df7fc" Namespace="calico-system" Pod="csi-node-driver-vmlwg" WorkloadEndpoint="ip--172--31--19--107-k8s-csi--node--driver--vmlwg-eth0" Jul 7 00:00:49.862830 containerd[2083]: 2025-07-07 00:00:49.806 [INFO][5510] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4215ddfea058f17b8b4dcedfbf18e67acec438012183d1c35386b79cf91df7fc" 
Namespace="calico-system" Pod="csi-node-driver-vmlwg" WorkloadEndpoint="ip--172--31--19--107-k8s-csi--node--driver--vmlwg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--107-k8s-csi--node--driver--vmlwg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fd3bd012-86e5-4807-95d5-ad6901284597", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-107", ContainerID:"4215ddfea058f17b8b4dcedfbf18e67acec438012183d1c35386b79cf91df7fc", Pod:"csi-node-driver-vmlwg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.66.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic2f301a877d", MAC:"ee:65:be:9f:fc:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:49.862830 containerd[2083]: 2025-07-07 00:00:49.846 [INFO][5510] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4215ddfea058f17b8b4dcedfbf18e67acec438012183d1c35386b79cf91df7fc" Namespace="calico-system" Pod="csi-node-driver-vmlwg" WorkloadEndpoint="ip--172--31--19--107-k8s-csi--node--driver--vmlwg-eth0" Jul 7 00:00:49.881951 systemd-logind[2061]: New session 10 of user core. Jul 7 00:00:49.891429 containerd[2083]: time="2025-07-07T00:00:49.887870973Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:49.896899 systemd[1]: Started session-10.scope - Session 10 of User core. 
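
"Setting the host side veth name to calic2f301a877d" shows Calico's naming scheme for the host end of each pod's veth pair: a fixed "cali" prefix followed by characters derived from the workload endpoint identity, truncated so the whole name fits the kernel's 15-byte IFNAMSIZ limit. A sketch of the idea, assuming a SHA-1 digest over an illustrative endpoint ID; Calico's exact hash input and algorithm may differ:

    package main

    import (
        "crypto/sha1"
        "encoding/hex"
        "fmt"
    )

    // hostVethName sketches Calico-style interface naming: "cali" plus the
    // first 11 hex characters of a digest of the endpoint ID, 15 characters
    // total so it always fits IFNAMSIZ. The digest choice is an assumption.
    func hostVethName(endpointID string) string {
        sum := sha1.Sum([]byte(endpointID))
        return "cali" + hex.EncodeToString(sum[:])[:11]
    }

    func main() {
        // A hypothetical endpoint ID; the real input Calico hashes is not
        // shown in the log.
        fmt.Println(hostVethName("calico-system/csi-node-driver-vmlwg/eth0"))
    }
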
Jul 7 00:00:49.903708 containerd[2083]: time="2025-07-07T00:00:49.902337137Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 7 00:00:49.918250 containerd[2083]: time="2025-07-07T00:00:49.917110863Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:49.946697 containerd[2083]: time="2025-07-07T00:00:49.945157638Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:49.949965 containerd[2083]: time="2025-07-07T00:00:49.949824813Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 3.075604225s" Jul 7 00:00:49.954553 containerd[2083]: time="2025-07-07T00:00:49.952819077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 7 00:00:49.970382 containerd[2083]: time="2025-07-07T00:00:49.970331463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 00:00:49.970931 containerd[2083]: time="2025-07-07T00:00:49.970355055Z" level=info msg="CreateContainer within sandbox \"9f1f44501d6c270c9fbf8af91d773a3b6a4e364f6c70370e7b32278b134a2f84\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 7 00:00:49.990205 containerd[2083]: time="2025-07-07T00:00:49.989720797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:00:49.990626 containerd[2083]: time="2025-07-07T00:00:49.990321516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:00:49.990626 containerd[2083]: time="2025-07-07T00:00:49.990353364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:50.017275 systemd-networkd[1648]: cali55695e46569: Link UP Jul 7 00:00:50.022922 systemd-networkd[1648]: cali55695e46569: Gained carrier Jul 7 00:00:50.061883 containerd[2083]: time="2025-07-07T00:00:50.003329394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:50.061883 containerd[2083]: time="2025-07-07T00:00:50.055073362Z" level=info msg="StopPodSandbox for \"fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e\"" Jul 7 00:00:50.097788 containerd[2083]: time="2025-07-07T00:00:50.096224998Z" level=info msg="CreateContainer within sandbox \"9f1f44501d6c270c9fbf8af91d773a3b6a4e364f6c70370e7b32278b134a2f84\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"05796f862589c0dfe06f177edac5e64da952d1dc654251acda5c7326bd807fb1\"" Jul 7 00:00:50.132947 systemd-networkd[1648]: cali17bcee60d44: Link UP Jul 7 00:00:50.139583 systemd[1]: run-containerd-runc-k8s.io-4215ddfea058f17b8b4dcedfbf18e67acec438012183d1c35386b79cf91df7fc-runc.CY6lvN.mount: Deactivated successfully. Jul 7 00:00:50.140820 systemd-networkd[1648]: cali17bcee60d44: Gained carrier Jul 7 00:00:50.163591 containerd[2083]: time="2025-07-07T00:00:50.163539057Z" level=info msg="StartContainer for \"05796f862589c0dfe06f177edac5e64da952d1dc654251acda5c7326bd807fb1\"" Jul 7 00:00:50.177914 systemd-networkd[1648]: cali8d281e2582f: Gained IPv6LL Jul 7 00:00:50.186874 containerd[2083]: 2025-07-07 00:00:49.581 [INFO][5539] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--107-k8s-coredns--7c65d6cfc9--xlnl6-eth0 coredns-7c65d6cfc9- kube-system 2e6c58c9-1e4d-4fb6-9bf1-ad7b4521fb7e 1016 0 2025-07-06 23:59:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-19-107 coredns-7c65d6cfc9-xlnl6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali55695e46569 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b65c321a511dbabf0df93cdb81982f02c3205a405b3d06c231eac6890e1792fd" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xlnl6" WorkloadEndpoint="ip--172--31--19--107-k8s-coredns--7c65d6cfc9--xlnl6-" Jul 7 00:00:50.186874 containerd[2083]: 2025-07-07 00:00:49.584 [INFO][5539] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b65c321a511dbabf0df93cdb81982f02c3205a405b3d06c231eac6890e1792fd" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xlnl6" WorkloadEndpoint="ip--172--31--19--107-k8s-coredns--7c65d6cfc9--xlnl6-eth0" Jul 7 00:00:50.186874 containerd[2083]: 2025-07-07 00:00:49.721 [INFO][5565] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b65c321a511dbabf0df93cdb81982f02c3205a405b3d06c231eac6890e1792fd" HandleID="k8s-pod-network.b65c321a511dbabf0df93cdb81982f02c3205a405b3d06c231eac6890e1792fd" Workload="ip--172--31--19--107-k8s-coredns--7c65d6cfc9--xlnl6-eth0" Jul 7 00:00:50.186874 containerd[2083]: 2025-07-07 00:00:49.721 [INFO][5565] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b65c321a511dbabf0df93cdb81982f02c3205a405b3d06c231eac6890e1792fd" HandleID="k8s-pod-network.b65c321a511dbabf0df93cdb81982f02c3205a405b3d06c231eac6890e1792fd" Workload="ip--172--31--19--107-k8s-coredns--7c65d6cfc9--xlnl6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f700), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-19-107", "pod":"coredns-7c65d6cfc9-xlnl6", "timestamp":"2025-07-07 00:00:49.721156049 +0000 UTC"}, Hostname:"ip-172-31-19-107", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:00:50.186874 containerd[2083]: 2025-07-07 00:00:49.722 [INFO][5565] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:50.186874 containerd[2083]: 2025-07-07 00:00:49.754 [INFO][5565] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:00:50.186874 containerd[2083]: 2025-07-07 00:00:49.754 [INFO][5565] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-107' Jul 7 00:00:50.186874 containerd[2083]: 2025-07-07 00:00:49.797 [INFO][5565] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b65c321a511dbabf0df93cdb81982f02c3205a405b3d06c231eac6890e1792fd" host="ip-172-31-19-107" Jul 7 00:00:50.186874 containerd[2083]: 2025-07-07 00:00:49.845 [INFO][5565] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-107" Jul 7 00:00:50.186874 containerd[2083]: 2025-07-07 00:00:49.861 [INFO][5565] ipam/ipam.go 511: Trying affinity for 192.168.66.64/26 host="ip-172-31-19-107" Jul 7 00:00:50.186874 containerd[2083]: 2025-07-07 00:00:49.879 [INFO][5565] ipam/ipam.go 158: Attempting to load block cidr=192.168.66.64/26 host="ip-172-31-19-107" Jul 7 00:00:50.186874 containerd[2083]: 2025-07-07 00:00:49.907 [INFO][5565] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.66.64/26 host="ip-172-31-19-107" Jul 7 00:00:50.186874 containerd[2083]: 2025-07-07 00:00:49.907 [INFO][5565] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.66.64/26 handle="k8s-pod-network.b65c321a511dbabf0df93cdb81982f02c3205a405b3d06c231eac6890e1792fd" host="ip-172-31-19-107" Jul 7 00:00:50.186874 containerd[2083]: 2025-07-07 00:00:49.914 [INFO][5565] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b65c321a511dbabf0df93cdb81982f02c3205a405b3d06c231eac6890e1792fd Jul 7 00:00:50.186874 containerd[2083]: 2025-07-07 00:00:49.939 [INFO][5565] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.66.64/26 handle="k8s-pod-network.b65c321a511dbabf0df93cdb81982f02c3205a405b3d06c231eac6890e1792fd" host="ip-172-31-19-107" Jul 7 00:00:50.186874 containerd[2083]: 2025-07-07 00:00:49.955 [INFO][5565] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.66.70/26] block=192.168.66.64/26 handle="k8s-pod-network.b65c321a511dbabf0df93cdb81982f02c3205a405b3d06c231eac6890e1792fd" host="ip-172-31-19-107" Jul 7 00:00:50.186874 containerd[2083]: 2025-07-07 00:00:49.955 [INFO][5565] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.66.70/26] handle="k8s-pod-network.b65c321a511dbabf0df93cdb81982f02c3205a405b3d06c231eac6890e1792fd" host="ip-172-31-19-107" Jul 7 00:00:50.186874 containerd[2083]: 2025-07-07 00:00:49.957 [INFO][5565] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
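
Reading the three interleaved IPAM conversations by timestamp shows the host-wide lock being handed off in strict order: [5550] holds it from 49.655 to 49.754; [5565] asks at 49.722 and only acquires at 49.754, the instant [5550] releases; [5558] asks at 49.734 and waits until 49.957. The queueing delay falls straight out of the logged timestamps, e.g. for [5565]:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.000"
        // Timestamps copied from the [5565] IPAM lines above.
        asked, _ := time.Parse(layout, "2025-07-07 00:00:49.722")
        got, _ := time.Parse(layout, "2025-07-07 00:00:49.754")
        fmt.Println(got.Sub(asked)) // 32ms spent queued on the host-wide lock
    }
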
Jul 7 00:00:50.186874 containerd[2083]: 2025-07-07 00:00:49.957 [INFO][5565] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.70/26] IPv6=[] ContainerID="b65c321a511dbabf0df93cdb81982f02c3205a405b3d06c231eac6890e1792fd" HandleID="k8s-pod-network.b65c321a511dbabf0df93cdb81982f02c3205a405b3d06c231eac6890e1792fd" Workload="ip--172--31--19--107-k8s-coredns--7c65d6cfc9--xlnl6-eth0" Jul 7 00:00:50.190574 containerd[2083]: 2025-07-07 00:00:49.986 [INFO][5539] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b65c321a511dbabf0df93cdb81982f02c3205a405b3d06c231eac6890e1792fd" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xlnl6" WorkloadEndpoint="ip--172--31--19--107-k8s-coredns--7c65d6cfc9--xlnl6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--107-k8s-coredns--7c65d6cfc9--xlnl6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2e6c58c9-1e4d-4fb6-9bf1-ad7b4521fb7e", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-107", ContainerID:"", Pod:"coredns-7c65d6cfc9-xlnl6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.66.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali55695e46569", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:50.190574 containerd[2083]: 2025-07-07 00:00:49.987 [INFO][5539] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.70/32] ContainerID="b65c321a511dbabf0df93cdb81982f02c3205a405b3d06c231eac6890e1792fd" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xlnl6" WorkloadEndpoint="ip--172--31--19--107-k8s-coredns--7c65d6cfc9--xlnl6-eth0" Jul 7 00:00:50.190574 containerd[2083]: 2025-07-07 00:00:49.988 [INFO][5539] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali55695e46569 ContainerID="b65c321a511dbabf0df93cdb81982f02c3205a405b3d06c231eac6890e1792fd" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xlnl6" WorkloadEndpoint="ip--172--31--19--107-k8s-coredns--7c65d6cfc9--xlnl6-eth0" Jul 7 00:00:50.190574 containerd[2083]: 2025-07-07 00:00:50.030 [INFO][5539] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b65c321a511dbabf0df93cdb81982f02c3205a405b3d06c231eac6890e1792fd" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xlnl6" 
WorkloadEndpoint="ip--172--31--19--107-k8s-coredns--7c65d6cfc9--xlnl6-eth0" Jul 7 00:00:50.190574 containerd[2083]: 2025-07-07 00:00:50.037 [INFO][5539] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b65c321a511dbabf0df93cdb81982f02c3205a405b3d06c231eac6890e1792fd" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xlnl6" WorkloadEndpoint="ip--172--31--19--107-k8s-coredns--7c65d6cfc9--xlnl6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--107-k8s-coredns--7c65d6cfc9--xlnl6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2e6c58c9-1e4d-4fb6-9bf1-ad7b4521fb7e", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-107", ContainerID:"b65c321a511dbabf0df93cdb81982f02c3205a405b3d06c231eac6890e1792fd", Pod:"coredns-7c65d6cfc9-xlnl6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.66.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali55695e46569", MAC:"ca:a1:ad:06:bb:d9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:50.190574 containerd[2083]: 2025-07-07 00:00:50.094 [INFO][5539] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b65c321a511dbabf0df93cdb81982f02c3205a405b3d06c231eac6890e1792fd" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xlnl6" WorkloadEndpoint="ip--172--31--19--107-k8s-coredns--7c65d6cfc9--xlnl6-eth0" Jul 7 00:00:50.236432 systemd-networkd[1648]: cali96eb455d75e: Gained IPv6LL Jul 7 00:00:50.243505 containerd[2083]: 2025-07-07 00:00:49.497 [INFO][5526] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--107-k8s-coredns--7c65d6cfc9--p6qwp-eth0 coredns-7c65d6cfc9- kube-system 3bf78fb3-72f6-471c-b914-66a504f5315e 1014 0 2025-07-06 23:59:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-19-107 coredns-7c65d6cfc9-p6qwp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali17bcee60d44 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0f1266cd1afa8c643f4b69cacbd62f87086d4d659c59f6af89b64285d5f8de46" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-p6qwp" WorkloadEndpoint="ip--172--31--19--107-k8s-coredns--7c65d6cfc9--p6qwp-" Jul 7 00:00:50.243505 containerd[2083]: 2025-07-07 00:00:49.498 [INFO][5526] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0f1266cd1afa8c643f4b69cacbd62f87086d4d659c59f6af89b64285d5f8de46" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p6qwp" WorkloadEndpoint="ip--172--31--19--107-k8s-coredns--7c65d6cfc9--p6qwp-eth0" Jul 7 00:00:50.243505 containerd[2083]: 2025-07-07 00:00:49.733 [INFO][5558] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0f1266cd1afa8c643f4b69cacbd62f87086d4d659c59f6af89b64285d5f8de46" HandleID="k8s-pod-network.0f1266cd1afa8c643f4b69cacbd62f87086d4d659c59f6af89b64285d5f8de46" Workload="ip--172--31--19--107-k8s-coredns--7c65d6cfc9--p6qwp-eth0" Jul 7 00:00:50.243505 containerd[2083]: 2025-07-07 00:00:49.734 [INFO][5558] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0f1266cd1afa8c643f4b69cacbd62f87086d4d659c59f6af89b64285d5f8de46" HandleID="k8s-pod-network.0f1266cd1afa8c643f4b69cacbd62f87086d4d659c59f6af89b64285d5f8de46" Workload="ip--172--31--19--107-k8s-coredns--7c65d6cfc9--p6qwp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00062e010), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-19-107", "pod":"coredns-7c65d6cfc9-p6qwp", "timestamp":"2025-07-07 00:00:49.733621006 +0000 UTC"}, Hostname:"ip-172-31-19-107", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:00:50.243505 containerd[2083]: 2025-07-07 00:00:49.734 [INFO][5558] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:50.243505 containerd[2083]: 2025-07-07 00:00:49.957 [INFO][5558] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:00:50.243505 containerd[2083]: 2025-07-07 00:00:49.957 [INFO][5558] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-107' Jul 7 00:00:50.243505 containerd[2083]: 2025-07-07 00:00:49.971 [INFO][5558] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0f1266cd1afa8c643f4b69cacbd62f87086d4d659c59f6af89b64285d5f8de46" host="ip-172-31-19-107" Jul 7 00:00:50.243505 containerd[2083]: 2025-07-07 00:00:49.982 [INFO][5558] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-107" Jul 7 00:00:50.243505 containerd[2083]: 2025-07-07 00:00:49.995 [INFO][5558] ipam/ipam.go 511: Trying affinity for 192.168.66.64/26 host="ip-172-31-19-107" Jul 7 00:00:50.243505 containerd[2083]: 2025-07-07 00:00:49.999 [INFO][5558] ipam/ipam.go 158: Attempting to load block cidr=192.168.66.64/26 host="ip-172-31-19-107" Jul 7 00:00:50.243505 containerd[2083]: 2025-07-07 00:00:50.005 [INFO][5558] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.66.64/26 host="ip-172-31-19-107" Jul 7 00:00:50.243505 containerd[2083]: 2025-07-07 00:00:50.005 [INFO][5558] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.66.64/26 handle="k8s-pod-network.0f1266cd1afa8c643f4b69cacbd62f87086d4d659c59f6af89b64285d5f8de46" host="ip-172-31-19-107" Jul 7 00:00:50.243505 containerd[2083]: 2025-07-07 00:00:50.047 [INFO][5558] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0f1266cd1afa8c643f4b69cacbd62f87086d4d659c59f6af89b64285d5f8de46 Jul 7 00:00:50.243505 containerd[2083]: 2025-07-07 00:00:50.071 [INFO][5558] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.66.64/26 handle="k8s-pod-network.0f1266cd1afa8c643f4b69cacbd62f87086d4d659c59f6af89b64285d5f8de46" host="ip-172-31-19-107" Jul 7 00:00:50.243505 containerd[2083]: 2025-07-07 00:00:50.104 [INFO][5558] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.66.71/26] block=192.168.66.64/26 handle="k8s-pod-network.0f1266cd1afa8c643f4b69cacbd62f87086d4d659c59f6af89b64285d5f8de46" host="ip-172-31-19-107" Jul 7 00:00:50.243505 containerd[2083]: 2025-07-07 00:00:50.105 [INFO][5558] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.66.71/26] handle="k8s-pod-network.0f1266cd1afa8c643f4b69cacbd62f87086d4d659c59f6af89b64285d5f8de46" host="ip-172-31-19-107" Jul 7 00:00:50.243505 containerd[2083]: 2025-07-07 00:00:50.105 [INFO][5558] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
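
One readability trap in the endpoint dumps above and below: WorkloadEndpointPort values are printed as Go hex literals, so the coredns ports look unfamiliar next to the decimal pod spec. Port:0x35 is 53 (dns and dns-tcp) and Port:0x23c1 is 9153 (the coredns metrics port):

    package main

    import "fmt"

    func main() {
        // Hex literals from the WorkloadEndpointPort dumps, printed in decimal.
        fmt.Println(0x35, 0x23c1) // 53 9153
    }
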
Jul 7 00:00:50.243505 containerd[2083]: 2025-07-07 00:00:50.105 [INFO][5558] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.71/26] IPv6=[] ContainerID="0f1266cd1afa8c643f4b69cacbd62f87086d4d659c59f6af89b64285d5f8de46" HandleID="k8s-pod-network.0f1266cd1afa8c643f4b69cacbd62f87086d4d659c59f6af89b64285d5f8de46" Workload="ip--172--31--19--107-k8s-coredns--7c65d6cfc9--p6qwp-eth0" Jul 7 00:00:50.245205 containerd[2083]: 2025-07-07 00:00:50.115 [INFO][5526] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0f1266cd1afa8c643f4b69cacbd62f87086d4d659c59f6af89b64285d5f8de46" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p6qwp" WorkloadEndpoint="ip--172--31--19--107-k8s-coredns--7c65d6cfc9--p6qwp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--107-k8s-coredns--7c65d6cfc9--p6qwp-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3bf78fb3-72f6-471c-b914-66a504f5315e", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-107", ContainerID:"", Pod:"coredns-7c65d6cfc9-p6qwp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.66.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali17bcee60d44", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:50.245205 containerd[2083]: 2025-07-07 00:00:50.116 [INFO][5526] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.71/32] ContainerID="0f1266cd1afa8c643f4b69cacbd62f87086d4d659c59f6af89b64285d5f8de46" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p6qwp" WorkloadEndpoint="ip--172--31--19--107-k8s-coredns--7c65d6cfc9--p6qwp-eth0" Jul 7 00:00:50.245205 containerd[2083]: 2025-07-07 00:00:50.116 [INFO][5526] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali17bcee60d44 ContainerID="0f1266cd1afa8c643f4b69cacbd62f87086d4d659c59f6af89b64285d5f8de46" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p6qwp" WorkloadEndpoint="ip--172--31--19--107-k8s-coredns--7c65d6cfc9--p6qwp-eth0" Jul 7 00:00:50.245205 containerd[2083]: 2025-07-07 00:00:50.166 [INFO][5526] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0f1266cd1afa8c643f4b69cacbd62f87086d4d659c59f6af89b64285d5f8de46" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p6qwp" 
WorkloadEndpoint="ip--172--31--19--107-k8s-coredns--7c65d6cfc9--p6qwp-eth0" Jul 7 00:00:50.245205 containerd[2083]: 2025-07-07 00:00:50.171 [INFO][5526] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0f1266cd1afa8c643f4b69cacbd62f87086d4d659c59f6af89b64285d5f8de46" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p6qwp" WorkloadEndpoint="ip--172--31--19--107-k8s-coredns--7c65d6cfc9--p6qwp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--107-k8s-coredns--7c65d6cfc9--p6qwp-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3bf78fb3-72f6-471c-b914-66a504f5315e", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-107", ContainerID:"0f1266cd1afa8c643f4b69cacbd62f87086d4d659c59f6af89b64285d5f8de46", Pod:"coredns-7c65d6cfc9-p6qwp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.66.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali17bcee60d44", MAC:"0a:5b:7e:bf:ea:08", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:50.245205 containerd[2083]: 2025-07-07 00:00:50.216 [INFO][5526] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0f1266cd1afa8c643f4b69cacbd62f87086d4d659c59f6af89b64285d5f8de46" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p6qwp" WorkloadEndpoint="ip--172--31--19--107-k8s-coredns--7c65d6cfc9--p6qwp-eth0" Jul 7 00:00:50.330371 containerd[2083]: time="2025-07-07T00:00:50.330214257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vmlwg,Uid:fd3bd012-86e5-4807-95d5-ad6901284597,Namespace:calico-system,Attempt:1,} returns sandbox id \"4215ddfea058f17b8b4dcedfbf18e67acec438012183d1c35386b79cf91df7fc\"" Jul 7 00:00:50.337338 containerd[2083]: time="2025-07-07T00:00:50.334255628Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:00:50.337338 containerd[2083]: time="2025-07-07T00:00:50.334444057Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:00:50.337338 containerd[2083]: time="2025-07-07T00:00:50.334496210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:50.337338 containerd[2083]: time="2025-07-07T00:00:50.334946308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:50.355099 containerd[2083]: time="2025-07-07T00:00:50.353631893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:00:50.357228 containerd[2083]: time="2025-07-07T00:00:50.356862670Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:00:50.358387 containerd[2083]: time="2025-07-07T00:00:50.356910917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:50.362590 containerd[2083]: time="2025-07-07T00:00:50.361780241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:50.662779 containerd[2083]: time="2025-07-07T00:00:50.661374314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-p6qwp,Uid:3bf78fb3-72f6-471c-b914-66a504f5315e,Namespace:kube-system,Attempt:1,} returns sandbox id \"0f1266cd1afa8c643f4b69cacbd62f87086d4d659c59f6af89b64285d5f8de46\"" Jul 7 00:00:50.683737 containerd[2083]: time="2025-07-07T00:00:50.683409229Z" level=info msg="CreateContainer within sandbox \"0f1266cd1afa8c643f4b69cacbd62f87086d4d659c59f6af89b64285d5f8de46\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:00:50.718426 containerd[2083]: time="2025-07-07T00:00:50.717426397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xlnl6,Uid:2e6c58c9-1e4d-4fb6-9bf1-ad7b4521fb7e,Namespace:kube-system,Attempt:1,} returns sandbox id \"b65c321a511dbabf0df93cdb81982f02c3205a405b3d06c231eac6890e1792fd\"" Jul 7 00:00:50.726135 containerd[2083]: time="2025-07-07T00:00:50.726080817Z" level=info msg="CreateContainer within sandbox \"b65c321a511dbabf0df93cdb81982f02c3205a405b3d06c231eac6890e1792fd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:00:50.752088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1838595372.mount: Deactivated successfully. 
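
Interleaved with the CNI chatter is the kubelet-driven container lifecycle, always in the same order: RunPodSandbox returns a sandbox id (creating the sandbox is what fires the Calico CNI ADD), CreateContainer is issued within that sandbox and returns a container id, and StartContainer runs it. A stub sketch of that ordering; the interface is a hypothetical stand-in for the CRI RuntimeService, not the real k8s.io/cri-api types:

    package main

    import "fmt"

    // runtime mirrors the slice of the CRI RuntimeService exercised in the
    // log; it is a hypothetical stand-in, not the real k8s.io/cri-api interface.
    type runtime interface {
        RunPodSandbox(name string) (sandboxID string, err error)
        CreateContainer(sandboxID, name string) (containerID string, err error)
        StartContainer(containerID string) error
    }

    // startPod walks the exact order the containerd log records for coredns:
    // sandbox first (which triggers the CNI ADD), then create, then start.
    func startPod(r runtime, pod, ctr string) error {
        sb, err := r.RunPodSandbox(pod)
        if err != nil {
            return err
        }
        id, err := r.CreateContainer(sb, ctr)
        if err != nil {
            return err
        }
        return r.StartContainer(id)
    }

    // fake is a no-op runtime so the sketch runs standalone.
    type fake struct{}

    func (fake) RunPodSandbox(n string) (string, error)       { return "sb-" + n, nil }
    func (fake) CreateContainer(sb, n string) (string, error) { return "ctr-" + n, nil }
    func (fake) StartContainer(id string) error               { fmt.Println("started", id); return nil }

    func main() {
        _ = startPod(fake{}, "coredns-7c65d6cfc9-p6qwp", "coredns")
    }
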
Jul 7 00:00:50.758484 containerd[2083]: time="2025-07-07T00:00:50.758430365Z" level=info msg="CreateContainer within sandbox \"0f1266cd1afa8c643f4b69cacbd62f87086d4d659c59f6af89b64285d5f8de46\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3167984d9bdcc7807b8b9c489562a15fd7c4a8c4d5b7d8f316db335fe96b7ac2\"" Jul 7 00:00:50.760505 containerd[2083]: time="2025-07-07T00:00:50.760404718Z" level=info msg="StartContainer for \"3167984d9bdcc7807b8b9c489562a15fd7c4a8c4d5b7d8f316db335fe96b7ac2\"" Jul 7 00:00:50.788139 containerd[2083]: time="2025-07-07T00:00:50.785608539Z" level=info msg="CreateContainer within sandbox \"b65c321a511dbabf0df93cdb81982f02c3205a405b3d06c231eac6890e1792fd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ffdab1b986f070afbd24d84856a214872a799131fcfa68a1fb58a2720485ea1b\"" Jul 7 00:00:50.794735 containerd[2083]: time="2025-07-07T00:00:50.794175045Z" level=info msg="StartContainer for \"ffdab1b986f070afbd24d84856a214872a799131fcfa68a1fb58a2720485ea1b\"" Jul 7 00:00:50.823119 containerd[2083]: time="2025-07-07T00:00:50.822955417Z" level=info msg="StartContainer for \"05796f862589c0dfe06f177edac5e64da952d1dc654251acda5c7326bd807fb1\" returns successfully" Jul 7 00:00:50.863467 containerd[2083]: 2025-07-07 00:00:50.661 [INFO][5646] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e" Jul 7 00:00:50.863467 containerd[2083]: 2025-07-07 00:00:50.663 [INFO][5646] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e" iface="eth0" netns="/var/run/netns/cni-b429ccd8-0041-671e-ddec-604a38e4c2bb" Jul 7 00:00:50.863467 containerd[2083]: 2025-07-07 00:00:50.663 [INFO][5646] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e" iface="eth0" netns="/var/run/netns/cni-b429ccd8-0041-671e-ddec-604a38e4c2bb" Jul 7 00:00:50.863467 containerd[2083]: 2025-07-07 00:00:50.664 [INFO][5646] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e" iface="eth0" netns="/var/run/netns/cni-b429ccd8-0041-671e-ddec-604a38e4c2bb" Jul 7 00:00:50.863467 containerd[2083]: 2025-07-07 00:00:50.664 [INFO][5646] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e" Jul 7 00:00:50.863467 containerd[2083]: 2025-07-07 00:00:50.664 [INFO][5646] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e" Jul 7 00:00:50.863467 containerd[2083]: 2025-07-07 00:00:50.807 [INFO][5772] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e" HandleID="k8s-pod-network.fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e" Workload="ip--172--31--19--107-k8s-calico--apiserver--674b869996--5z2gh-eth0" Jul 7 00:00:50.863467 containerd[2083]: 2025-07-07 00:00:50.807 [INFO][5772] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:50.863467 containerd[2083]: 2025-07-07 00:00:50.807 [INFO][5772] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:00:50.863467 containerd[2083]: 2025-07-07 00:00:50.840 [WARNING][5772] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e" HandleID="k8s-pod-network.fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e" Workload="ip--172--31--19--107-k8s-calico--apiserver--674b869996--5z2gh-eth0" Jul 7 00:00:50.863467 containerd[2083]: 2025-07-07 00:00:50.840 [INFO][5772] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e" HandleID="k8s-pod-network.fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e" Workload="ip--172--31--19--107-k8s-calico--apiserver--674b869996--5z2gh-eth0" Jul 7 00:00:50.863467 containerd[2083]: 2025-07-07 00:00:50.844 [INFO][5772] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:50.863467 containerd[2083]: 2025-07-07 00:00:50.850 [INFO][5646] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e" Jul 7 00:00:50.864892 containerd[2083]: time="2025-07-07T00:00:50.863617043Z" level=info msg="TearDown network for sandbox \"fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e\" successfully" Jul 7 00:00:50.864892 containerd[2083]: time="2025-07-07T00:00:50.863685190Z" level=info msg="StopPodSandbox for \"fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e\" returns successfully" Jul 7 00:00:50.866200 containerd[2083]: time="2025-07-07T00:00:50.866154225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-674b869996-5z2gh,Uid:833c9c7e-23d5-495b-bc31-3bfc82fc6450,Namespace:calico-apiserver,Attempt:1,}" Jul 7 00:00:51.036976 containerd[2083]: time="2025-07-07T00:00:51.036610057Z" level=info msg="StartContainer for \"3167984d9bdcc7807b8b9c489562a15fd7c4a8c4d5b7d8f316db335fe96b7ac2\" returns successfully" Jul 7 00:00:51.050791 containerd[2083]: time="2025-07-07T00:00:51.050737166Z" level=info msg="StartContainer for \"ffdab1b986f070afbd24d84856a214872a799131fcfa68a1fb58a2720485ea1b\" returns successfully" Jul 7 00:00:51.246445 systemd-networkd[1648]: cali6598e8ffb54: Link UP Jul 7 00:00:51.248356 systemd-networkd[1648]: cali6598e8ffb54: Gained carrier Jul 7 00:00:51.267006 systemd-networkd[1648]: calic2f301a877d: Gained IPv6LL Jul 7 00:00:51.290272 containerd[2083]: 2025-07-07 00:00:51.073 [INFO][5833] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--107-k8s-calico--apiserver--674b869996--5z2gh-eth0 calico-apiserver-674b869996- calico-apiserver 833c9c7e-23d5-495b-bc31-3bfc82fc6450 1040 0 2025-07-07 00:00:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:674b869996 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-107 calico-apiserver-674b869996-5z2gh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6598e8ffb54 [] [] }} ContainerID="6fb99b233e7dc8ca0c1efd435ebeba3c3406fff8b57efce180b453682ca02035" Namespace="calico-apiserver" Pod="calico-apiserver-674b869996-5z2gh" WorkloadEndpoint="ip--172--31--19--107-k8s-calico--apiserver--674b869996--5z2gh-" Jul 7 00:00:51.290272 containerd[2083]: 2025-07-07 00:00:51.074 [INFO][5833] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6fb99b233e7dc8ca0c1efd435ebeba3c3406fff8b57efce180b453682ca02035" Namespace="calico-apiserver" Pod="calico-apiserver-674b869996-5z2gh" WorkloadEndpoint="ip--172--31--19--107-k8s-calico--apiserver--674b869996--5z2gh-eth0" Jul 7 00:00:51.290272 containerd[2083]: 2025-07-07 00:00:51.148 [INFO][5876] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6fb99b233e7dc8ca0c1efd435ebeba3c3406fff8b57efce180b453682ca02035" HandleID="k8s-pod-network.6fb99b233e7dc8ca0c1efd435ebeba3c3406fff8b57efce180b453682ca02035" Workload="ip--172--31--19--107-k8s-calico--apiserver--674b869996--5z2gh-eth0" Jul 7 00:00:51.290272 containerd[2083]: 2025-07-07 00:00:51.149 [INFO][5876] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6fb99b233e7dc8ca0c1efd435ebeba3c3406fff8b57efce180b453682ca02035" HandleID="k8s-pod-network.6fb99b233e7dc8ca0c1efd435ebeba3c3406fff8b57efce180b453682ca02035" Workload="ip--172--31--19--107-k8s-calico--apiserver--674b869996--5z2gh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5210), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-19-107", "pod":"calico-apiserver-674b869996-5z2gh", "timestamp":"2025-07-07 00:00:51.14860568 +0000 UTC"}, Hostname:"ip-172-31-19-107", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:00:51.290272 containerd[2083]: 2025-07-07 00:00:51.149 [INFO][5876] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:51.290272 containerd[2083]: 2025-07-07 00:00:51.149 [INFO][5876] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:00:51.290272 containerd[2083]: 2025-07-07 00:00:51.149 [INFO][5876] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-107' Jul 7 00:00:51.290272 containerd[2083]: 2025-07-07 00:00:51.159 [INFO][5876] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6fb99b233e7dc8ca0c1efd435ebeba3c3406fff8b57efce180b453682ca02035" host="ip-172-31-19-107" Jul 7 00:00:51.290272 containerd[2083]: 2025-07-07 00:00:51.167 [INFO][5876] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-107" Jul 7 00:00:51.290272 containerd[2083]: 2025-07-07 00:00:51.188 [INFO][5876] ipam/ipam.go 511: Trying affinity for 192.168.66.64/26 host="ip-172-31-19-107" Jul 7 00:00:51.290272 containerd[2083]: 2025-07-07 00:00:51.195 [INFO][5876] ipam/ipam.go 158: Attempting to load block cidr=192.168.66.64/26 host="ip-172-31-19-107" Jul 7 00:00:51.290272 containerd[2083]: 2025-07-07 00:00:51.199 [INFO][5876] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.66.64/26 host="ip-172-31-19-107" Jul 7 00:00:51.290272 containerd[2083]: 2025-07-07 00:00:51.199 [INFO][5876] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.66.64/26 handle="k8s-pod-network.6fb99b233e7dc8ca0c1efd435ebeba3c3406fff8b57efce180b453682ca02035" host="ip-172-31-19-107" Jul 7 00:00:51.290272 containerd[2083]: 2025-07-07 00:00:51.203 [INFO][5876] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6fb99b233e7dc8ca0c1efd435ebeba3c3406fff8b57efce180b453682ca02035 Jul 7 00:00:51.290272 containerd[2083]: 2025-07-07 00:00:51.211 [INFO][5876] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.66.64/26 handle="k8s-pod-network.6fb99b233e7dc8ca0c1efd435ebeba3c3406fff8b57efce180b453682ca02035" host="ip-172-31-19-107" Jul 7 00:00:51.290272 containerd[2083]: 2025-07-07 00:00:51.228 [INFO][5876] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.66.72/26] block=192.168.66.64/26 handle="k8s-pod-network.6fb99b233e7dc8ca0c1efd435ebeba3c3406fff8b57efce180b453682ca02035" host="ip-172-31-19-107" Jul 7 00:00:51.290272 containerd[2083]: 2025-07-07 00:00:51.228 [INFO][5876] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.66.72/26] handle="k8s-pod-network.6fb99b233e7dc8ca0c1efd435ebeba3c3406fff8b57efce180b453682ca02035" host="ip-172-31-19-107" Jul 7 00:00:51.290272 containerd[2083]: 2025-07-07 00:00:51.228 [INFO][5876] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
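
All four assignments in this section come out of the same affine block: 192.168.66.64/26 covers .64 through .127, 64 addresses in total, and the node hands out .69, .70, .71 and .72 in arrival order. The block bounds check out with net/netip:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.66.64/26")
        fmt.Println(1 << (32 - block.Bits()))                              // 64 addresses in the block
        fmt.Println(block.Contains(netip.MustParseAddr("192.168.66.72")))  // true
        fmt.Println(block.Contains(netip.MustParseAddr("192.168.66.128"))) // false: block ends at .127
    }
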
Jul 7 00:00:51.290272 containerd[2083]: 2025-07-07 00:00:51.228 [INFO][5876] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.72/26] IPv6=[] ContainerID="6fb99b233e7dc8ca0c1efd435ebeba3c3406fff8b57efce180b453682ca02035" HandleID="k8s-pod-network.6fb99b233e7dc8ca0c1efd435ebeba3c3406fff8b57efce180b453682ca02035" Workload="ip--172--31--19--107-k8s-calico--apiserver--674b869996--5z2gh-eth0" Jul 7 00:00:51.297694 containerd[2083]: 2025-07-07 00:00:51.234 [INFO][5833] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6fb99b233e7dc8ca0c1efd435ebeba3c3406fff8b57efce180b453682ca02035" Namespace="calico-apiserver" Pod="calico-apiserver-674b869996-5z2gh" WorkloadEndpoint="ip--172--31--19--107-k8s-calico--apiserver--674b869996--5z2gh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--107-k8s-calico--apiserver--674b869996--5z2gh-eth0", GenerateName:"calico-apiserver-674b869996-", Namespace:"calico-apiserver", SelfLink:"", UID:"833c9c7e-23d5-495b-bc31-3bfc82fc6450", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"674b869996", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-107", ContainerID:"", Pod:"calico-apiserver-674b869996-5z2gh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.66.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6598e8ffb54", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:51.297694 containerd[2083]: 2025-07-07 00:00:51.235 [INFO][5833] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.72/32] ContainerID="6fb99b233e7dc8ca0c1efd435ebeba3c3406fff8b57efce180b453682ca02035" Namespace="calico-apiserver" Pod="calico-apiserver-674b869996-5z2gh" WorkloadEndpoint="ip--172--31--19--107-k8s-calico--apiserver--674b869996--5z2gh-eth0" Jul 7 00:00:51.297694 containerd[2083]: 2025-07-07 00:00:51.235 [INFO][5833] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6598e8ffb54 ContainerID="6fb99b233e7dc8ca0c1efd435ebeba3c3406fff8b57efce180b453682ca02035" Namespace="calico-apiserver" Pod="calico-apiserver-674b869996-5z2gh" WorkloadEndpoint="ip--172--31--19--107-k8s-calico--apiserver--674b869996--5z2gh-eth0" Jul 7 00:00:51.297694 containerd[2083]: 2025-07-07 00:00:51.249 [INFO][5833] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6fb99b233e7dc8ca0c1efd435ebeba3c3406fff8b57efce180b453682ca02035" Namespace="calico-apiserver" Pod="calico-apiserver-674b869996-5z2gh" WorkloadEndpoint="ip--172--31--19--107-k8s-calico--apiserver--674b869996--5z2gh-eth0" Jul 7 00:00:51.297694 containerd[2083]: 2025-07-07 00:00:51.250 [INFO][5833] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="6fb99b233e7dc8ca0c1efd435ebeba3c3406fff8b57efce180b453682ca02035" Namespace="calico-apiserver" Pod="calico-apiserver-674b869996-5z2gh" WorkloadEndpoint="ip--172--31--19--107-k8s-calico--apiserver--674b869996--5z2gh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--107-k8s-calico--apiserver--674b869996--5z2gh-eth0", GenerateName:"calico-apiserver-674b869996-", Namespace:"calico-apiserver", SelfLink:"", UID:"833c9c7e-23d5-495b-bc31-3bfc82fc6450", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"674b869996", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-107", ContainerID:"6fb99b233e7dc8ca0c1efd435ebeba3c3406fff8b57efce180b453682ca02035", Pod:"calico-apiserver-674b869996-5z2gh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.66.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6598e8ffb54", MAC:"b6:fa:2c:5a:8e:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:51.297694 containerd[2083]: 2025-07-07 00:00:51.284 [INFO][5833] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6fb99b233e7dc8ca0c1efd435ebeba3c3406fff8b57efce180b453682ca02035" Namespace="calico-apiserver" Pod="calico-apiserver-674b869996-5z2gh" WorkloadEndpoint="ip--172--31--19--107-k8s-calico--apiserver--674b869996--5z2gh-eth0" Jul 7 00:00:51.362418 sshd[5560]: pam_unix(sshd:session): session closed for user core Jul 7 00:00:51.369841 containerd[2083]: time="2025-07-07T00:00:51.366894248Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:00:51.369841 containerd[2083]: time="2025-07-07T00:00:51.366987216Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:00:51.369841 containerd[2083]: time="2025-07-07T00:00:51.367011436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:51.369841 containerd[2083]: time="2025-07-07T00:00:51.368131657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:51.374357 systemd[1]: sshd@9-172.31.19.107:22-147.75.109.163:37458.service: Deactivated successfully. Jul 7 00:00:51.388609 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 00:00:51.392910 systemd-logind[2061]: Session 10 logged out. Waiting for processes to exit. 
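[annotation] The sshd churn interleaved above follows systemd's per-connection naming scheme: each accepted connection gets a transient unit named sshd@<instance>-<local>:<port>-<peer>:<port>.service, and PAM/logind pairs it with a session-<n>.scope. A small Go helper that pulls the endpoints back out of such a unit name (the regex and the printed field names are illustrative assumptions, not part of systemd):

    package main

    import (
        "fmt"
        "regexp"
    )

    // unitRe matches names like "sshd@9-172.31.19.107:22-147.75.109.163:37458.service".
    var unitRe = regexp.MustCompile(`^sshd@(\d+)-([\d.]+):(\d+)-([\d.]+):(\d+)\.service$`)

    func main() {
        m := unitRe.FindStringSubmatch("sshd@9-172.31.19.107:22-147.75.109.163:37458.service")
        if m == nil {
            panic("unit name did not match")
        }
        fmt.Printf("instance=%s local=%s:%s peer=%s:%s\n", m[1], m[2], m[3], m[4], m[5])
        // instance=9 local=172.31.19.107:22 peer=147.75.109.163:37458
    }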
Jul 7 00:00:51.407034 systemd[1]: Started sshd@10-172.31.19.107:22-147.75.109.163:37470.service - OpenSSH per-connection server daemon (147.75.109.163:37470). Jul 7 00:00:51.409923 systemd-logind[2061]: Removed session 10. Jul 7 00:00:51.536990 containerd[2083]: time="2025-07-07T00:00:51.536920008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-674b869996-5z2gh,Uid:833c9c7e-23d5-495b-bc31-3bfc82fc6450,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6fb99b233e7dc8ca0c1efd435ebeba3c3406fff8b57efce180b453682ca02035\"" Jul 7 00:00:51.564856 systemd[1]: run-netns-cni\x2db429ccd8\x2d0041\x2d671e\x2dddec\x2d604a38e4c2bb.mount: Deactivated successfully. Jul 7 00:00:51.666714 sshd[5928]: Accepted publickey for core from 147.75.109.163 port 37470 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:00:51.673044 sshd[5928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:00:51.684116 systemd-logind[2061]: New session 11 of user core. Jul 7 00:00:51.690382 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 7 00:00:51.728816 kubelet[3316]: I0707 00:00:51.728740 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-p6qwp" podStartSLOduration=58.728706645 podStartE2EDuration="58.728706645s" podCreationTimestamp="2025-07-06 23:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:00:51.725496008 +0000 UTC m=+64.874921871" watchObservedRunningTime="2025-07-07 00:00:51.728706645 +0000 UTC m=+64.878132489" Jul 7 00:00:51.814104 kubelet[3316]: I0707 00:00:51.814025 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-xlnl6" podStartSLOduration=58.813997409 podStartE2EDuration="58.813997409s" podCreationTimestamp="2025-07-06 23:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:00:51.777906487 +0000 UTC m=+64.927332337" watchObservedRunningTime="2025-07-07 00:00:51.813997409 +0000 UTC m=+64.963423259" Jul 7 00:00:52.028200 systemd-networkd[1648]: cali55695e46569: Gained IPv6LL Jul 7 00:00:52.032836 systemd-journald[1569]: Under memory pressure, flushing caches. Jul 7 00:00:52.031148 systemd-resolved[1982]: Under memory pressure, flushing caches. Jul 7 00:00:52.031259 systemd-resolved[1982]: Flushed all caches. Jul 7 00:00:52.157640 systemd-networkd[1648]: cali17bcee60d44: Gained IPv6LL Jul 7 00:00:52.305019 sshd[5928]: pam_unix(sshd:session): session closed for user core Jul 7 00:00:52.320924 systemd-logind[2061]: Session 11 logged out. Waiting for processes to exit. Jul 7 00:00:52.321206 systemd[1]: sshd@10-172.31.19.107:22-147.75.109.163:37470.service: Deactivated successfully. Jul 7 00:00:52.335685 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 00:00:52.344505 systemd-logind[2061]: Removed session 11. Jul 7 00:00:52.351127 systemd[1]: Started sshd@11-172.31.19.107:22-147.75.109.163:37486.service - OpenSSH per-connection server daemon (147.75.109.163:37486). 
Jul 7 00:00:52.518425 sshd[5969]: Accepted publickey for core from 147.75.109.163 port 37486 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:00:52.519202 sshd[5969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:00:52.526543 systemd-logind[2061]: New session 12 of user core. Jul 7 00:00:52.535119 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 00:00:52.605302 systemd-networkd[1648]: cali6598e8ffb54: Gained IPv6LL Jul 7 00:00:52.788418 sshd[5969]: pam_unix(sshd:session): session closed for user core Jul 7 00:00:52.794644 systemd-logind[2061]: Session 12 logged out. Waiting for processes to exit. Jul 7 00:00:52.796081 systemd[1]: sshd@11-172.31.19.107:22-147.75.109.163:37486.service: Deactivated successfully. Jul 7 00:00:52.802415 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 00:00:52.804464 systemd-logind[2061]: Removed session 12. Jul 7 00:00:54.879299 ntpd[2047]: Listen normally on 6 vxlan.calico 192.168.66.64:123 Jul 7 00:00:54.880167 ntpd[2047]: Listen normally on 7 cali90ed208f272 [fe80::ecee:eeff:feee:eeee%4]:123 Jul 7 00:00:54.880235 ntpd[2047]: Listen normally on 8 vxlan.calico [fe80::6450:96ff:fe8a:d63c%5]:123 Jul 7 00:00:54.880274 ntpd[2047]: Listen normally on 9 cali1a6bbed993c [fe80::ecee:eeff:feee:eeee%8]:123 Jul 7 00:00:54.880303 ntpd[2047]: Listen normally on 10 cali8d281e2582f [fe80::ecee:eeff:feee:eeee%9]:123 Jul 7 00:00:54.880333 ntpd[2047]: Listen normally on 11 cali96eb455d75e [fe80::ecee:eeff:feee:eeee%10]:123 Jul 7 00:00:54.880385 ntpd[2047]: Listen normally on 12 calic2f301a877d [fe80::ecee:eeff:feee:eeee%11]:123 Jul 7 00:00:54.880436 ntpd[2047]: Listen normally on 13 cali55695e46569 [fe80::ecee:eeff:feee:eeee%12]:123 Jul 7 00:00:54.880476 ntpd[2047]: Listen normally on 14 cali17bcee60d44 [fe80::ecee:eeff:feee:eeee%13]:123 Jul 7 00:00:54.880516 ntpd[2047]: Listen normally on 15 cali6598e8ffb54 [fe80::ecee:eeff:feee:eeee%14]:123 Jul 7 00:00:55.079246 containerd[2083]: time="2025-07-07T00:00:55.079193572Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:55.083972 containerd[2083]: 
time="2025-07-07T00:00:55.083737531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 7 00:00:55.086826 containerd[2083]: time="2025-07-07T00:00:55.086780194Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:55.091575 containerd[2083]: time="2025-07-07T00:00:55.091511003Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:55.092593 containerd[2083]: time="2025-07-07T00:00:55.092548239Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 5.122121657s" Jul 7 00:00:55.092786 containerd[2083]: time="2025-07-07T00:00:55.092760102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 00:00:55.094529 containerd[2083]: time="2025-07-07T00:00:55.094497860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 7 00:00:55.099919 containerd[2083]: time="2025-07-07T00:00:55.099873765Z" level=info msg="CreateContainer within sandbox \"f909e71267557346a04fd3e897bd0edb965a947059e7da2ddad7d37debc45d1f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 00:00:55.135337 containerd[2083]: time="2025-07-07T00:00:55.131887370Z" level=info msg="CreateContainer within sandbox \"f909e71267557346a04fd3e897bd0edb965a947059e7da2ddad7d37debc45d1f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"22e14617606cb218d12cc7fdcc68e4c89fcf0142d34ff2b322024f905eed6fbf\"" Jul 7 00:00:55.143117 containerd[2083]: time="2025-07-07T00:00:55.143059474Z" level=info msg="StartContainer for \"22e14617606cb218d12cc7fdcc68e4c89fcf0142d34ff2b322024f905eed6fbf\"" Jul 7 00:00:55.292999 containerd[2083]: time="2025-07-07T00:00:55.292908716Z" level=info msg="StartContainer for \"22e14617606cb218d12cc7fdcc68e4c89fcf0142d34ff2b322024f905eed6fbf\" returns successfully" Jul 7 00:00:55.838359 kubelet[3316]: I0707 00:00:55.836548 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-674b869996-75pq4" podStartSLOduration=40.990362317 podStartE2EDuration="47.836525271s" podCreationTimestamp="2025-07-07 00:00:08 +0000 UTC" firstStartedPulling="2025-07-07 00:00:48.248044614 +0000 UTC m=+61.397470443" lastFinishedPulling="2025-07-07 00:00:55.094207569 +0000 UTC m=+68.243633397" observedRunningTime="2025-07-07 00:00:55.83500004 +0000 UTC m=+68.984425891" watchObservedRunningTime="2025-07-07 00:00:55.836525271 +0000 UTC m=+68.985951122" Jul 7 00:00:57.824164 systemd[1]: Started sshd@12-172.31.19.107:22-147.75.109.163:53648.service - OpenSSH per-connection server daemon (147.75.109.163:53648). Jul 7 00:00:57.880210 kubelet[3316]: I0707 00:00:57.876545 3316 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:00:57.991748 systemd-journald[1569]: Under memory pressure, flushing caches. 
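[annotation] The Pulled lines report both the bytes read during the pull (47317977 compressed for the apiserver image), the unpacked size (48810696), and the wall time, so the effective pull rate is easy to derive. A quick back-of-the-envelope in Go, with the numbers copied from the apiserver pull above:

    package main

    import "fmt"

    func main() {
        const (
            bytesRead = 47317977    // "active requests=0, bytes read=47317977"
            elapsed   = 5.122121657 // "... in 5.122121657s"
        )
        // ~9.24 MB/s over the wire, i.e. roughly 8.8 MiB/s effective.
        fmt.Printf("%.1f MiB/s\n", bytesRead/elapsed/(1<<20))
    }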
Jul 7 00:00:57.980265 systemd-resolved[1982]: Under memory pressure, flushing caches. Jul 7 00:00:57.980327 systemd-resolved[1982]: Flushed all caches. Jul 7 00:00:58.168361 sshd[6071]: Accepted publickey for core from 147.75.109.163 port 53648 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:00:58.174123 sshd[6071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:00:58.192687 systemd-logind[2061]: New session 13 of user core. Jul 7 00:00:58.197153 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 7 00:00:59.173846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4090910812.mount: Deactivated successfully. Jul 7 00:00:59.688001 sshd[6071]: pam_unix(sshd:session): session closed for user core Jul 7 00:00:59.705101 systemd[1]: sshd@12-172.31.19.107:22-147.75.109.163:53648.service: Deactivated successfully. Jul 7 00:00:59.732152 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 00:00:59.736440 systemd-logind[2061]: Session 13 logged out. Waiting for processes to exit. Jul 7 00:00:59.754307 systemd-logind[2061]: Removed session 13. Jul 7 00:01:00.028014 systemd-resolved[1982]: Under memory pressure, flushing caches. Jul 7 00:01:00.028057 systemd-resolved[1982]: Flushed all caches. Jul 7 00:01:00.032693 systemd-journald[1569]: Under memory pressure, flushing caches. Jul 7 00:01:00.846163 containerd[2083]: time="2025-07-07T00:01:00.845760220Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:00.849693 containerd[2083]: time="2025-07-07T00:01:00.849416861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 7 00:01:00.852158 containerd[2083]: time="2025-07-07T00:01:00.851837800Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:00.856231 containerd[2083]: time="2025-07-07T00:01:00.856173310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:00.857777 containerd[2083]: time="2025-07-07T00:01:00.856896552Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 5.762358506s" Jul 7 00:01:00.857777 containerd[2083]: time="2025-07-07T00:01:00.857777935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 7 00:01:00.950965 containerd[2083]: time="2025-07-07T00:01:00.950901216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 7 00:01:01.000227 containerd[2083]: time="2025-07-07T00:01:01.000164274Z" level=info msg="CreateContainer within sandbox \"5dc795fd116b5823ddc8ec12b94edd79c6cb711839e49fe8b55870ac07f359c3\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 7 00:01:01.122720 containerd[2083]: time="2025-07-07T00:01:01.118696322Z" level=info 
msg="CreateContainer within sandbox \"5dc795fd116b5823ddc8ec12b94edd79c6cb711839e49fe8b55870ac07f359c3\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"d13d7e932f92f1a9a6dca3c251bee91fb649e1bde58ca5c199b95edcc8f49cfc\"" Jul 7 00:01:01.223464 containerd[2083]: time="2025-07-07T00:01:01.223401030Z" level=info msg="StartContainer for \"d13d7e932f92f1a9a6dca3c251bee91fb649e1bde58ca5c199b95edcc8f49cfc\"" Jul 7 00:01:02.088789 systemd-journald[1569]: Under memory pressure, flushing caches. Jul 7 00:01:02.089132 systemd-resolved[1982]: Under memory pressure, flushing caches. Jul 7 00:01:02.089160 systemd-resolved[1982]: Flushed all caches. Jul 7 00:01:03.512381 containerd[2083]: time="2025-07-07T00:01:03.512205707Z" level=info msg="StartContainer for \"d13d7e932f92f1a9a6dca3c251bee91fb649e1bde58ca5c199b95edcc8f49cfc\" returns successfully" Jul 7 00:01:04.129091 systemd-journald[1569]: Under memory pressure, flushing caches. Jul 7 00:01:04.125298 systemd-resolved[1982]: Under memory pressure, flushing caches. Jul 7 00:01:04.125340 systemd-resolved[1982]: Flushed all caches. Jul 7 00:01:04.691068 kubelet[3316]: I0707 00:01:04.659817 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-xq9q9" podStartSLOduration=40.595960151 podStartE2EDuration="52.631721713s" podCreationTimestamp="2025-07-07 00:00:12 +0000 UTC" firstStartedPulling="2025-07-07 00:00:48.88828018 +0000 UTC m=+62.037706027" lastFinishedPulling="2025-07-07 00:01:00.924041741 +0000 UTC m=+74.073467589" observedRunningTime="2025-07-07 00:01:04.625604625 +0000 UTC m=+77.775030475" watchObservedRunningTime="2025-07-07 00:01:04.631721713 +0000 UTC m=+77.781147564" Jul 7 00:01:04.742105 systemd[1]: Started sshd@13-172.31.19.107:22-147.75.109.163:53656.service - OpenSSH per-connection server daemon (147.75.109.163:53656). Jul 7 00:01:05.040846 sshd[6151]: Accepted publickey for core from 147.75.109.163 port 53656 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:01:05.043216 sshd[6151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:05.057863 systemd-logind[2061]: New session 14 of user core. Jul 7 00:01:05.063954 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 00:01:06.177120 systemd-journald[1569]: Under memory pressure, flushing caches. Jul 7 00:01:06.172093 systemd-resolved[1982]: Under memory pressure, flushing caches. Jul 7 00:01:06.172102 systemd-resolved[1982]: Flushed all caches. Jul 7 00:01:06.664011 sshd[6151]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:06.673937 systemd[1]: sshd@13-172.31.19.107:22-147.75.109.163:53656.service: Deactivated successfully. Jul 7 00:01:06.682029 systemd-logind[2061]: Session 14 logged out. Waiting for processes to exit. Jul 7 00:01:06.683504 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 00:01:06.703461 systemd-logind[2061]: Removed session 14. 
Jul 7 00:01:06.822985 containerd[2083]: time="2025-07-07T00:01:06.822915311Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:06.828185 containerd[2083]: time="2025-07-07T00:01:06.827520715Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 7 00:01:06.829546 containerd[2083]: time="2025-07-07T00:01:06.829499704Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:06.835416 containerd[2083]: time="2025-07-07T00:01:06.834370390Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:06.835584 containerd[2083]: time="2025-07-07T00:01:06.835517057Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 5.884387112s" Jul 7 00:01:06.835584 containerd[2083]: time="2025-07-07T00:01:06.835561327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 7 00:01:06.882782 containerd[2083]: time="2025-07-07T00:01:06.881165281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 7 00:01:06.909829 systemd[1]: run-containerd-runc-k8s.io-d13d7e932f92f1a9a6dca3c251bee91fb649e1bde58ca5c199b95edcc8f49cfc-runc.UsqhEL.mount: Deactivated successfully. Jul 7 00:01:07.056345 containerd[2083]: time="2025-07-07T00:01:07.055862544Z" level=info msg="CreateContainer within sandbox \"a30d4f0a30730bc0e64adb603119190f1c6e6eb22d782fe627d233d9a9676cad\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 7 00:01:07.126941 containerd[2083]: time="2025-07-07T00:01:07.126732633Z" level=info msg="CreateContainer within sandbox \"a30d4f0a30730bc0e64adb603119190f1c6e6eb22d782fe627d233d9a9676cad\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"0aeb358699eab97d467e570dda904cb87d90ade7c9e789f8d2d90669fb96fff7\"" Jul 7 00:01:07.176149 containerd[2083]: time="2025-07-07T00:01:07.176106736Z" level=info msg="StartContainer for \"0aeb358699eab97d467e570dda904cb87d90ade7c9e789f8d2d90669fb96fff7\"" Jul 7 00:01:07.882144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount210193469.mount: Deactivated successfully. Jul 7 00:01:07.924960 containerd[2083]: time="2025-07-07T00:01:07.923640302Z" level=info msg="StartContainer for \"0aeb358699eab97d467e570dda904cb87d90ade7c9e789f8d2d90669fb96fff7\" returns successfully" Jul 7 00:01:08.222150 systemd-journald[1569]: Under memory pressure, flushing caches. Jul 7 00:01:08.220395 systemd-resolved[1982]: Under memory pressure, flushing caches. Jul 7 00:01:08.220422 systemd-resolved[1982]: Flushed all caches. 
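[annotation] The ImageCreate/Pulled sequences in these entries are containerd's view of a CRI-driven image pull. Roughly the same operation driven directly through containerd's Go client would look like the sketch below; the socket path and the k8s.io namespace match Kubernetes defaults, and this is a reference sketch rather than what the kubelet literally executes (it goes through the CRI gRPC surface instead).

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Kubernetes-managed images live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // A successful pull produces the same kind of ImageCreate event and
        // "Pulled image ..." log line seen above for the calico images.
        img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/kube-controllers:v3.30.2",
            containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled", img.Name())
    }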
Jul 7 00:01:09.191298 kubelet[3316]: I0707 00:01:09.189359 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-d699df5cb-rvx8c" podStartSLOduration=38.267786042 podStartE2EDuration="56.189240032s" podCreationTimestamp="2025-07-07 00:00:13 +0000 UTC" firstStartedPulling="2025-07-07 00:00:48.934742408 +0000 UTC m=+62.084168237" lastFinishedPulling="2025-07-07 00:01:06.856196385 +0000 UTC m=+80.005622227" observedRunningTime="2025-07-07 00:01:09.174564509 +0000 UTC m=+82.323990359" watchObservedRunningTime="2025-07-07 00:01:09.189240032 +0000 UTC m=+82.338665882" Jul 7 00:01:09.384522 containerd[2083]: time="2025-07-07T00:01:09.384469341Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:09.390878 containerd[2083]: time="2025-07-07T00:01:09.389895075Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 7 00:01:09.392511 containerd[2083]: time="2025-07-07T00:01:09.392467845Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:09.397681 containerd[2083]: time="2025-07-07T00:01:09.396857617Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:09.400397 containerd[2083]: time="2025-07-07T00:01:09.399815754Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 2.518588945s" Jul 7 00:01:09.400397 containerd[2083]: time="2025-07-07T00:01:09.399870909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 7 00:01:09.404414 containerd[2083]: time="2025-07-07T00:01:09.404375337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 7 00:01:09.406209 containerd[2083]: time="2025-07-07T00:01:09.405772422Z" level=info msg="CreateContainer within sandbox \"4215ddfea058f17b8b4dcedfbf18e67acec438012183d1c35386b79cf91df7fc\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 7 00:01:09.496032 containerd[2083]: time="2025-07-07T00:01:09.495611819Z" level=info msg="CreateContainer within sandbox \"4215ddfea058f17b8b4dcedfbf18e67acec438012183d1c35386b79cf91df7fc\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4c57b4577f7a0fb4822e862832600fda0034c6debf790cc42e0944c4a321361a\"" Jul 7 00:01:09.498838 containerd[2083]: time="2025-07-07T00:01:09.498784214Z" level=info msg="StartContainer for \"4c57b4577f7a0fb4822e862832600fda0034c6debf790cc42e0944c4a321361a\"" Jul 7 00:01:09.502172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount700079702.mount: Deactivated successfully. 
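[annotation] The kubelet "Observed pod startup duration" lines encode a simple relationship: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). That is why the calico-kube-controllers line above shows 38.267786042s against 56.189240032s, while the earlier coredns lines (no pull, zero-valued pull timestamps) show SLO equal to E2E. Reproducing the arithmetic in Go from the printed timestamps (agreement is exact up to the print precision of the pull timestamps):

    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(v string) time.Time {
        // Layout matches Go's default time.Time formatting used in the log.
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", v)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        // Timestamps copied from the calico-kube-controllers line above.
        created := mustParse("2025-07-07 00:00:13 +0000 UTC")
        running := mustParse("2025-07-07 00:01:09.189240032 +0000 UTC")
        pullStart := mustParse("2025-07-07 00:00:48.934742408 +0000 UTC")
        pullEnd := mustParse("2025-07-07 00:01:06.856196385 +0000 UTC")

        e2e := running.Sub(created)
        slo := e2e - pullEnd.Sub(pullStart) // SLO duration excludes image pulling
        fmt.Println(e2e, slo)               // ~56.189240032s, ~38.267786s
    }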
Jul 7 00:01:09.652625 containerd[2083]: time="2025-07-07T00:01:09.652571397Z" level=info msg="StartContainer for \"4c57b4577f7a0fb4822e862832600fda0034c6debf790cc42e0944c4a321361a\" returns successfully" Jul 7 00:01:10.272159 systemd-journald[1569]: Under memory pressure, flushing caches. Jul 7 00:01:10.270741 systemd-resolved[1982]: Under memory pressure, flushing caches. Jul 7 00:01:10.271786 systemd-resolved[1982]: Flushed all caches. Jul 7 00:01:11.587165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2821531168.mount: Deactivated successfully. Jul 7 00:01:11.606859 containerd[2083]: time="2025-07-07T00:01:11.606705056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:11.609892 containerd[2083]: time="2025-07-07T00:01:11.608923730Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 7 00:01:11.609892 containerd[2083]: time="2025-07-07T00:01:11.609492024Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:11.614890 containerd[2083]: time="2025-07-07T00:01:11.613704585Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:11.614890 containerd[2083]: time="2025-07-07T00:01:11.614486599Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 2.208530355s" Jul 7 00:01:11.614890 containerd[2083]: time="2025-07-07T00:01:11.614518255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 7 00:01:11.618946 containerd[2083]: time="2025-07-07T00:01:11.617147189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 00:01:11.625184 containerd[2083]: time="2025-07-07T00:01:11.624906797Z" level=info msg="CreateContainer within sandbox \"9f1f44501d6c270c9fbf8af91d773a3b6a4e364f6c70370e7b32278b134a2f84\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 7 00:01:11.646646 containerd[2083]: time="2025-07-07T00:01:11.646371596Z" level=info msg="CreateContainer within sandbox \"9f1f44501d6c270c9fbf8af91d773a3b6a4e364f6c70370e7b32278b134a2f84\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"1f71414dfe7d1537f225abe5cf658b070bb9493968f157d1c671db44ff9c5239\"" Jul 7 00:01:11.652526 containerd[2083]: time="2025-07-07T00:01:11.652032088Z" level=info msg="StartContainer for \"1f71414dfe7d1537f225abe5cf658b070bb9493968f157d1c671db44ff9c5239\"" Jul 7 00:01:11.721788 systemd[1]: Started sshd@14-172.31.19.107:22-147.75.109.163:40344.service - OpenSSH per-connection server daemon (147.75.109.163:40344). Jul 7 00:01:11.864265 systemd[1]: run-containerd-runc-k8s.io-1f71414dfe7d1537f225abe5cf658b070bb9493968f157d1c671db44ff9c5239-runc.zbeCQ9.mount: Deactivated successfully. 
Jul 7 00:01:11.911618 containerd[2083]: time="2025-07-07T00:01:11.911529088Z" level=info msg="StartContainer for \"1f71414dfe7d1537f225abe5cf658b070bb9493968f157d1c671db44ff9c5239\" returns successfully" Jul 7 00:01:11.969711 containerd[2083]: time="2025-07-07T00:01:11.969385944Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:11.971980 containerd[2083]: time="2025-07-07T00:01:11.971885718Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 7 00:01:11.974295 containerd[2083]: time="2025-07-07T00:01:11.974238880Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 355.275815ms" Jul 7 00:01:11.974452 containerd[2083]: time="2025-07-07T00:01:11.974397682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 00:01:11.976876 containerd[2083]: time="2025-07-07T00:01:11.976558158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 7 00:01:11.987527 containerd[2083]: time="2025-07-07T00:01:11.987285098Z" level=info msg="CreateContainer within sandbox \"6fb99b233e7dc8ca0c1efd435ebeba3c3406fff8b57efce180b453682ca02035\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 00:01:12.014410 containerd[2083]: time="2025-07-07T00:01:12.014332497Z" level=info msg="CreateContainer within sandbox \"6fb99b233e7dc8ca0c1efd435ebeba3c3406fff8b57efce180b453682ca02035\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"22e915a5a66aaa0e5fa920387c5318c476d548b2f5b25c1c75b7f62f0d392fc4\"" Jul 7 00:01:12.019219 containerd[2083]: time="2025-07-07T00:01:12.017057580Z" level=info msg="StartContainer for \"22e915a5a66aaa0e5fa920387c5318c476d548b2f5b25c1c75b7f62f0d392fc4\"" Jul 7 00:01:12.096494 sshd[6336]: Accepted publickey for core from 147.75.109.163 port 40344 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:01:12.098611 systemd[1]: run-containerd-runc-k8s.io-22e915a5a66aaa0e5fa920387c5318c476d548b2f5b25c1c75b7f62f0d392fc4-runc.m7nzW2.mount: Deactivated successfully. Jul 7 00:01:12.104619 sshd[6336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:12.115759 systemd-logind[2061]: New session 15 of user core. Jul 7 00:01:12.121214 systemd[1]: Started session-15.scope - Session 15 of User core. 
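[annotation] The RunPodSandbox -> CreateContainer -> StartContainer progression running through these containerd entries is the CRI call sequence the kubelet drives for every pod. A compressed sketch against the CRI runtime API follows (gRPC stubs from k8s.io/cri-api; the request configs are abbreviated placeholders, not a working pod spec):

    package main

    import (
        "context"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtime.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        // 1. "RunPodSandbox for &PodSandboxMetadata{...} returns sandbox id ..."
        sb, err := rt.RunPodSandbox(ctx, &runtime.RunPodSandboxRequest{
            Config: &runtime.PodSandboxConfig{ /* metadata, namespace, attempt, ... */ },
        })
        if err != nil {
            log.Fatal(err)
        }

        // 2. "CreateContainer within sandbox ... returns container id ..."
        c, err := rt.CreateContainer(ctx, &runtime.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config:       &runtime.ContainerConfig{ /* image, command, mounts, ... */ },
        })
        if err != nil {
            log.Fatal(err)
        }

        // 3. "StartContainer for ... returns successfully"
        if _, err := rt.StartContainer(ctx, &runtime.StartContainerRequest{
            ContainerId: c.ContainerId,
        }); err != nil {
            log.Fatal(err)
        }
    }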
Jul 7 00:01:12.184112 containerd[2083]: time="2025-07-07T00:01:12.183956498Z" level=info msg="StartContainer for \"22e915a5a66aaa0e5fa920387c5318c476d548b2f5b25c1c75b7f62f0d392fc4\" returns successfully" Jul 7 00:01:12.219429 kubelet[3316]: I0707 00:01:12.219319 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6c7fdbc78d-zzfkq" podStartSLOduration=2.468617439 podStartE2EDuration="27.219287922s" podCreationTimestamp="2025-07-07 00:00:45 +0000 UTC" firstStartedPulling="2025-07-07 00:00:46.865732887 +0000 UTC m=+60.015158729" lastFinishedPulling="2025-07-07 00:01:11.616403366 +0000 UTC m=+84.765829212" observedRunningTime="2025-07-07 00:01:12.218631284 +0000 UTC m=+85.368057134" watchObservedRunningTime="2025-07-07 00:01:12.219287922 +0000 UTC m=+85.368713773" Jul 7 00:01:14.050861 systemd-journald[1569]: Under memory pressure, flushing caches. Jul 7 00:01:14.044473 systemd-resolved[1982]: Under memory pressure, flushing caches. Jul 7 00:01:14.044519 systemd-resolved[1982]: Flushed all caches. Jul 7 00:01:14.330046 sshd[6336]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:14.348181 systemd[1]: sshd@14-172.31.19.107:22-147.75.109.163:40344.service: Deactivated successfully. Jul 7 00:01:14.398952 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 00:01:14.402525 systemd-logind[2061]: Session 15 logged out. Waiting for processes to exit. Jul 7 00:01:14.432118 systemd[1]: Started sshd@15-172.31.19.107:22-147.75.109.163:40360.service - OpenSSH per-connection server daemon (147.75.109.163:40360). Jul 7 00:01:14.437156 systemd-logind[2061]: Removed session 15. Jul 7 00:01:14.688759 sshd[6424]: Accepted publickey for core from 147.75.109.163 port 40360 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:01:14.691461 sshd[6424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:14.714757 systemd-logind[2061]: New session 16 of user core. Jul 7 00:01:14.723624 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jul 7 00:01:14.865682 containerd[2083]: time="2025-07-07T00:01:14.862115793Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:14.872122 containerd[2083]: time="2025-07-07T00:01:14.872020014Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 7 00:01:14.896815 containerd[2083]: time="2025-07-07T00:01:14.896757172Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:14.898919 containerd[2083]: time="2025-07-07T00:01:14.898867338Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:14.902243 containerd[2083]: time="2025-07-07T00:01:14.902193999Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.92558957s" Jul 7 00:01:14.902394 containerd[2083]: time="2025-07-07T00:01:14.902252721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 7 00:01:14.909041 containerd[2083]: time="2025-07-07T00:01:14.908994120Z" level=info msg="CreateContainer within sandbox \"4215ddfea058f17b8b4dcedfbf18e67acec438012183d1c35386b79cf91df7fc\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 7 00:01:14.965768 containerd[2083]: time="2025-07-07T00:01:14.964279221Z" level=info msg="CreateContainer within sandbox \"4215ddfea058f17b8b4dcedfbf18e67acec438012183d1c35386b79cf91df7fc\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2fc558ffa4939fbf9d57a3a63620e3178bf3a6a31735056789213c20cfc6fb2e\"" Jul 7 00:01:14.971188 containerd[2083]: time="2025-07-07T00:01:14.968024928Z" level=info msg="StartContainer for \"2fc558ffa4939fbf9d57a3a63620e3178bf3a6a31735056789213c20cfc6fb2e\"" Jul 7 00:01:14.975706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2539659629.mount: Deactivated successfully. Jul 7 00:01:15.181784 systemd[1]: run-containerd-runc-k8s.io-2fc558ffa4939fbf9d57a3a63620e3178bf3a6a31735056789213c20cfc6fb2e-runc.uby9fu.mount: Deactivated successfully. Jul 7 00:01:15.271607 containerd[2083]: time="2025-07-07T00:01:15.271463901Z" level=info msg="StartContainer for \"2fc558ffa4939fbf9d57a3a63620e3178bf3a6a31735056789213c20cfc6fb2e\" returns successfully" Jul 7 00:01:15.675956 sshd[6424]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:15.707057 systemd[1]: Started sshd@16-172.31.19.107:22-147.75.109.163:40376.service - OpenSSH per-connection server daemon (147.75.109.163:40376). Jul 7 00:01:15.716503 systemd[1]: sshd@15-172.31.19.107:22-147.75.109.163:40360.service: Deactivated successfully. Jul 7 00:01:15.731456 systemd[1]: session-16.scope: Deactivated successfully. 
Jul 7 00:01:15.736054 systemd-logind[2061]: Session 16 logged out. Waiting for processes to exit. Jul 7 00:01:15.753746 systemd-logind[2061]: Removed session 16. Jul 7 00:01:15.954232 sshd[6471]: Accepted publickey for core from 147.75.109.163 port 40376 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:01:15.959224 sshd[6471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:15.973548 systemd-logind[2061]: New session 17 of user core. Jul 7 00:01:15.983078 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 7 00:01:16.097365 systemd-journald[1569]: Under memory pressure, flushing caches. Jul 7 00:01:16.096738 systemd-resolved[1982]: Under memory pressure, flushing caches. Jul 7 00:01:16.096748 systemd-resolved[1982]: Flushed all caches. Jul 7 00:01:16.468096 kubelet[3316]: I0707 00:01:16.466173 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-674b869996-5z2gh" podStartSLOduration=48.031930276 podStartE2EDuration="1m8.466137722s" podCreationTimestamp="2025-07-07 00:00:08 +0000 UTC" firstStartedPulling="2025-07-07 00:00:51.541871209 +0000 UTC m=+64.691297050" lastFinishedPulling="2025-07-07 00:01:11.976078588 +0000 UTC m=+85.125504496" observedRunningTime="2025-07-07 00:01:12.348287871 +0000 UTC m=+85.497713721" watchObservedRunningTime="2025-07-07 00:01:16.466137722 +0000 UTC m=+89.615563573" Jul 7 00:01:17.090147 kubelet[3316]: I0707 00:01:17.077623 3316 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 7 00:01:17.104747 kubelet[3316]: I0707 00:01:17.104700 3316 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 7 00:01:18.141983 systemd-journald[1569]: Under memory pressure, flushing caches. Jul 7 00:01:18.141653 systemd-resolved[1982]: Under memory pressure, flushing caches. Jul 7 00:01:18.141694 systemd-resolved[1982]: Flushed all caches. Jul 7 00:01:20.197835 systemd-journald[1569]: Under memory pressure, flushing caches. Jul 7 00:01:20.188625 systemd-resolved[1982]: Under memory pressure, flushing caches. Jul 7 00:01:20.188687 systemd-resolved[1982]: Flushed all caches. Jul 7 00:01:21.354097 sshd[6471]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:21.444629 systemd[1]: Started sshd@17-172.31.19.107:22-147.75.109.163:48752.service - OpenSSH per-connection server daemon (147.75.109.163:48752). Jul 7 00:01:21.451208 systemd[1]: sshd@16-172.31.19.107:22-147.75.109.163:40376.service: Deactivated successfully. Jul 7 00:01:21.461497 systemd-logind[2061]: Session 17 logged out. Waiting for processes to exit. Jul 7 00:01:21.461678 systemd[1]: session-17.scope: Deactivated successfully. Jul 7 00:01:21.487434 systemd-logind[2061]: Removed session 17. 
Jul 7 00:01:21.750338 sshd[6499]: Accepted publickey for core from 147.75.109.163 port 48752 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:01:21.754959 sshd[6499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:21.763003 kubelet[3316]: I0707 00:01:21.733847 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-vmlwg" podStartSLOduration=44.108685293 podStartE2EDuration="1m8.678248004s" podCreationTimestamp="2025-07-07 00:00:13 +0000 UTC" firstStartedPulling="2025-07-07 00:00:50.334005143 +0000 UTC m=+63.483430980" lastFinishedPulling="2025-07-07 00:01:14.90356785 +0000 UTC m=+88.052993691" observedRunningTime="2025-07-07 00:01:16.46761447 +0000 UTC m=+89.617040322" watchObservedRunningTime="2025-07-07 00:01:21.678248004 +0000 UTC m=+94.827673866" Jul 7 00:01:21.765739 systemd-logind[2061]: New session 18 of user core. Jul 7 00:01:21.774079 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 7 00:01:22.241028 systemd-journald[1569]: Under memory pressure, flushing caches. Jul 7 00:01:22.236652 systemd-resolved[1982]: Under memory pressure, flushing caches. Jul 7 00:01:22.236712 systemd-resolved[1982]: Flushed all caches. Jul 7 00:01:24.298357 systemd-journald[1569]: Under memory pressure, flushing caches. Jul 7 00:01:24.290191 systemd-resolved[1982]: Under memory pressure, flushing caches. Jul 7 00:01:24.290206 systemd-resolved[1982]: Flushed all caches. Jul 7 00:01:26.102086 sshd[6499]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:26.172700 systemd[1]: Started sshd@18-172.31.19.107:22-147.75.109.163:35728.service - OpenSSH per-connection server daemon (147.75.109.163:35728). Jul 7 00:01:26.193696 systemd[1]: sshd@17-172.31.19.107:22-147.75.109.163:48752.service: Deactivated successfully. Jul 7 00:01:26.216021 systemd[1]: session-18.scope: Deactivated successfully. Jul 7 00:01:26.217722 systemd-logind[2061]: Session 18 logged out. Waiting for processes to exit. Jul 7 00:01:26.229274 systemd-logind[2061]: Removed session 18. Jul 7 00:01:26.343894 systemd-journald[1569]: Under memory pressure, flushing caches. Jul 7 00:01:26.342175 systemd-resolved[1982]: Under memory pressure, flushing caches. Jul 7 00:01:26.342187 systemd-resolved[1982]: Flushed all caches. Jul 7 00:01:26.531692 sshd[6545]: Accepted publickey for core from 147.75.109.163 port 35728 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:01:26.541106 sshd[6545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:26.585935 systemd-logind[2061]: New session 19 of user core. Jul 7 00:01:26.592842 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 7 00:01:28.384551 systemd-journald[1569]: Under memory pressure, flushing caches. Jul 7 00:01:28.408844 systemd-resolved[1982]: Under memory pressure, flushing caches. Jul 7 00:01:28.408863 systemd-resolved[1982]: Flushed all caches. Jul 7 00:01:29.979602 kubelet[3316]: E0707 00:01:29.976005 3316 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.616s" Jul 7 00:01:30.430905 systemd-journald[1569]: Under memory pressure, flushing caches. Jul 7 00:01:30.428398 systemd-resolved[1982]: Under memory pressure, flushing caches. Jul 7 00:01:30.428408 systemd-resolved[1982]: Flushed all caches. 
Jul 7 00:01:32.403444 sshd[6545]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:32.479891 systemd-journald[1569]: Under memory pressure, flushing caches. Jul 7 00:01:32.478090 systemd-resolved[1982]: Under memory pressure, flushing caches. Jul 7 00:01:32.478124 systemd-resolved[1982]: Flushed all caches. Jul 7 00:01:32.482794 systemd[1]: sshd@18-172.31.19.107:22-147.75.109.163:35728.service: Deactivated successfully. Jul 7 00:01:32.489926 systemd-logind[2061]: Session 19 logged out. Waiting for processes to exit. Jul 7 00:01:32.490524 systemd[1]: session-19.scope: Deactivated successfully. Jul 7 00:01:32.500342 systemd-logind[2061]: Removed session 19. Jul 7 00:01:34.527569 systemd-journald[1569]: Under memory pressure, flushing caches. Jul 7 00:01:34.527103 systemd-resolved[1982]: Under memory pressure, flushing caches. Jul 7 00:01:34.527115 systemd-resolved[1982]: Flushed all caches. Jul 7 00:01:34.608858 systemd[1]: run-containerd-runc-k8s.io-d13d7e932f92f1a9a6dca3c251bee91fb649e1bde58ca5c199b95edcc8f49cfc-runc.YbJ8R7.mount: Deactivated successfully. Jul 7 00:01:36.572183 systemd-resolved[1982]: Under memory pressure, flushing caches. Jul 7 00:01:36.574550 systemd-journald[1569]: Under memory pressure, flushing caches. Jul 7 00:01:36.572192 systemd-resolved[1982]: Flushed all caches. Jul 7 00:01:37.470129 systemd[1]: Started sshd@19-172.31.19.107:22-147.75.109.163:60608.service - OpenSSH per-connection server daemon (147.75.109.163:60608). Jul 7 00:01:37.958123 sshd[6640]: Accepted publickey for core from 147.75.109.163 port 60608 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:01:37.962207 sshd[6640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:38.034024 systemd-logind[2061]: New session 20 of user core. Jul 7 00:01:38.040200 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 7 00:01:38.620762 systemd-resolved[1982]: Under memory pressure, flushing caches. Jul 7 00:01:38.623390 systemd-journald[1569]: Under memory pressure, flushing caches. Jul 7 00:01:38.620773 systemd-resolved[1982]: Flushed all caches. Jul 7 00:01:39.896575 sshd[6640]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:39.925389 systemd[1]: sshd@19-172.31.19.107:22-147.75.109.163:60608.service: Deactivated successfully. Jul 7 00:01:39.947031 systemd[1]: session-20.scope: Deactivated successfully. Jul 7 00:01:39.948005 systemd-logind[2061]: Session 20 logged out. Waiting for processes to exit. Jul 7 00:01:39.958737 systemd-logind[2061]: Removed session 20. Jul 7 00:01:40.672078 systemd-journald[1569]: Under memory pressure, flushing caches. Jul 7 00:01:40.672324 systemd-resolved[1982]: Under memory pressure, flushing caches. Jul 7 00:01:40.672357 systemd-resolved[1982]: Flushed all caches. Jul 7 00:01:42.721233 systemd-journald[1569]: Under memory pressure, flushing caches. Jul 7 00:01:42.716018 systemd-resolved[1982]: Under memory pressure, flushing caches. Jul 7 00:01:42.716053 systemd-resolved[1982]: Flushed all caches. Jul 7 00:01:44.977156 systemd[1]: Started sshd@20-172.31.19.107:22-147.75.109.163:60618.service - OpenSSH per-connection server daemon (147.75.109.163:60618). 
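[annotation] The teardown entries that follow show Calico's CNI DEL guard in action: the stored WorkloadEndpoint still carries the ContainerID of the live goldmane sandbox (5dc795fd...), so the DEL replayed for the stale sandbox 1b57fb59... must not delete the endpoint, only release anything held under its own handle. The guard is essentially an ownership comparison, sketched below (function and variable names are illustrative, not Calico's actual code):

    package main

    import "fmt"

    // shouldDeleteWEP mirrors the check logged as "CNI_CONTAINERID does not
    // match WorkloadEndpoint ContainerID, don't delete WEP.": only a DEL from
    // the sandbox that owns the endpoint may remove it.
    func shouldDeleteWEP(wepContainerID, cniContainerID string) bool {
        return wepContainerID == cniContainerID
    }

    func main() {
        live := "5dc795fd116b5823ddc8ec12b94edd79c6cb711839e49fe8b55870ac07f359c3"  // WEP's ContainerID
        stale := "1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92" // CNI_CONTAINERID of the DEL
        fmt.Println(shouldDeleteWEP(live, stale)) // false -> keep the WEP, just release the stale handle
    }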
Jul 7 00:01:45.333640 sshd[6655]: Accepted publickey for core from 147.75.109.163 port 60618 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:01:45.339040 sshd[6655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:45.350794 systemd-logind[2061]: New session 21 of user core. Jul 7 00:01:45.359157 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 7 00:01:47.854867 sshd[6655]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:47.867136 systemd[1]: sshd@20-172.31.19.107:22-147.75.109.163:60618.service: Deactivated successfully. Jul 7 00:01:47.903491 systemd[1]: session-21.scope: Deactivated successfully. Jul 7 00:01:47.904742 systemd-logind[2061]: Session 21 logged out. Waiting for processes to exit. Jul 7 00:01:47.939773 systemd-logind[2061]: Removed session 21. Jul 7 00:01:48.031896 systemd-journald[1569]: Under memory pressure, flushing caches. Jul 7 00:01:48.027943 systemd-resolved[1982]: Under memory pressure, flushing caches. Jul 7 00:01:48.027954 systemd-resolved[1982]: Flushed all caches. Jul 7 00:01:48.415082 containerd[2083]: time="2025-07-07T00:01:48.399030478Z" level=info msg="StopPodSandbox for \"1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92\"" Jul 7 00:01:49.963222 containerd[2083]: 2025-07-07 00:01:49.373 [WARNING][6678] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--107-k8s-goldmane--58fd7646b9--xq9q9-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"561fc67c-cd50-4c5b-b964-b8cb6f5c6bbe", ResourceVersion:"1345", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-107", ContainerID:"5dc795fd116b5823ddc8ec12b94edd79c6cb711839e49fe8b55870ac07f359c3", Pod:"goldmane-58fd7646b9-xq9q9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.66.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8d281e2582f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:49.963222 containerd[2083]: 2025-07-07 00:01:49.375 [INFO][6678] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92" Jul 7 00:01:49.963222 containerd[2083]: 2025-07-07 00:01:49.375 [INFO][6678] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92" iface="eth0" netns="" Jul 7 00:01:49.963222 containerd[2083]: 2025-07-07 00:01:49.375 [INFO][6678] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92" Jul 7 00:01:49.963222 containerd[2083]: 2025-07-07 00:01:49.375 [INFO][6678] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92" Jul 7 00:01:49.963222 containerd[2083]: 2025-07-07 00:01:49.883 [INFO][6685] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92" HandleID="k8s-pod-network.1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92" Workload="ip--172--31--19--107-k8s-goldmane--58fd7646b9--xq9q9-eth0" Jul 7 00:01:49.963222 containerd[2083]: 2025-07-07 00:01:49.902 [INFO][6685] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:49.963222 containerd[2083]: 2025-07-07 00:01:49.902 [INFO][6685] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:49.963222 containerd[2083]: 2025-07-07 00:01:49.925 [WARNING][6685] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92" HandleID="k8s-pod-network.1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92" Workload="ip--172--31--19--107-k8s-goldmane--58fd7646b9--xq9q9-eth0" Jul 7 00:01:49.963222 containerd[2083]: 2025-07-07 00:01:49.925 [INFO][6685] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92" HandleID="k8s-pod-network.1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92" Workload="ip--172--31--19--107-k8s-goldmane--58fd7646b9--xq9q9-eth0" Jul 7 00:01:49.963222 containerd[2083]: 2025-07-07 00:01:49.928 [INFO][6685] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:49.963222 containerd[2083]: 2025-07-07 00:01:49.932 [INFO][6678] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92" Jul 7 00:01:50.002465 containerd[2083]: time="2025-07-07T00:01:49.994105766Z" level=info msg="TearDown network for sandbox \"1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92\" successfully" Jul 7 00:01:50.002465 containerd[2083]: time="2025-07-07T00:01:50.002299460Z" level=info msg="StopPodSandbox for \"1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92\" returns successfully" Jul 7 00:01:50.055995 containerd[2083]: time="2025-07-07T00:01:50.055933206Z" level=info msg="RemovePodSandbox for \"1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92\"" Jul 7 00:01:50.081945 systemd-journald[1569]: Under memory pressure, flushing caches. Jul 7 00:01:50.078950 systemd-resolved[1982]: Under memory pressure, flushing caches. Jul 7 00:01:50.078961 systemd-resolved[1982]: Flushed all caches. Jul 7 00:01:50.100803 containerd[2083]: time="2025-07-07T00:01:50.100426791Z" level=info msg="Forcibly stopping sandbox \"1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92\"" Jul 7 00:01:50.946897 containerd[2083]: 2025-07-07 00:01:50.416 [WARNING][6700] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--107-k8s-goldmane--58fd7646b9--xq9q9-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"561fc67c-cd50-4c5b-b964-b8cb6f5c6bbe", ResourceVersion:"1345", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-107", ContainerID:"5dc795fd116b5823ddc8ec12b94edd79c6cb711839e49fe8b55870ac07f359c3", Pod:"goldmane-58fd7646b9-xq9q9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.66.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8d281e2582f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:50.946897 containerd[2083]: 2025-07-07 00:01:50.421 [INFO][6700] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92" Jul 7 00:01:50.946897 containerd[2083]: 2025-07-07 00:01:50.421 [INFO][6700] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92" iface="eth0" netns="" Jul 7 00:01:50.946897 containerd[2083]: 2025-07-07 00:01:50.421 [INFO][6700] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92" Jul 7 00:01:50.946897 containerd[2083]: 2025-07-07 00:01:50.421 [INFO][6700] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92" Jul 7 00:01:50.946897 containerd[2083]: 2025-07-07 00:01:50.825 [INFO][6707] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92" HandleID="k8s-pod-network.1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92" Workload="ip--172--31--19--107-k8s-goldmane--58fd7646b9--xq9q9-eth0" Jul 7 00:01:50.946897 containerd[2083]: 2025-07-07 00:01:50.831 [INFO][6707] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:50.946897 containerd[2083]: 2025-07-07 00:01:50.836 [INFO][6707] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:50.946897 containerd[2083]: 2025-07-07 00:01:50.908 [WARNING][6707] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92" HandleID="k8s-pod-network.1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92" Workload="ip--172--31--19--107-k8s-goldmane--58fd7646b9--xq9q9-eth0" Jul 7 00:01:50.946897 containerd[2083]: 2025-07-07 00:01:50.908 [INFO][6707] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92" HandleID="k8s-pod-network.1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92" Workload="ip--172--31--19--107-k8s-goldmane--58fd7646b9--xq9q9-eth0" Jul 7 00:01:50.946897 containerd[2083]: 2025-07-07 00:01:50.926 [INFO][6707] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:50.946897 containerd[2083]: 2025-07-07 00:01:50.936 [INFO][6700] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92" Jul 7 00:01:50.958441 containerd[2083]: time="2025-07-07T00:01:50.947957360Z" level=info msg="TearDown network for sandbox \"1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92\" successfully" Jul 7 00:01:51.240700 containerd[2083]: time="2025-07-07T00:01:51.240412859Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:01:51.242516 containerd[2083]: time="2025-07-07T00:01:51.242048438Z" level=info msg="RemovePodSandbox \"1b57fb59f3abd077debaf51ca6659dc70d44d8475ac3e18c8b74fe2c40cb4e92\" returns successfully" Jul 7 00:01:51.270723 containerd[2083]: time="2025-07-07T00:01:51.270632727Z" level=info msg="StopPodSandbox for \"80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56\"" Jul 7 00:01:51.633195 containerd[2083]: 2025-07-07 00:01:51.462 [WARNING][6722] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--107-k8s-csi--node--driver--vmlwg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fd3bd012-86e5-4807-95d5-ad6901284597", ResourceVersion:"1239", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-107", ContainerID:"4215ddfea058f17b8b4dcedfbf18e67acec438012183d1c35386b79cf91df7fc", Pod:"csi-node-driver-vmlwg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.66.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic2f301a877d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:51.633195 containerd[2083]: 2025-07-07 00:01:51.473 [INFO][6722] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56" Jul 7 00:01:51.633195 containerd[2083]: 2025-07-07 00:01:51.473 [INFO][6722] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56" iface="eth0" netns="" Jul 7 00:01:51.633195 containerd[2083]: 2025-07-07 00:01:51.474 [INFO][6722] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56" Jul 7 00:01:51.633195 containerd[2083]: 2025-07-07 00:01:51.475 [INFO][6722] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56" Jul 7 00:01:51.633195 containerd[2083]: 2025-07-07 00:01:51.589 [INFO][6737] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56" HandleID="k8s-pod-network.80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56" Workload="ip--172--31--19--107-k8s-csi--node--driver--vmlwg-eth0" Jul 7 00:01:51.633195 containerd[2083]: 2025-07-07 00:01:51.590 [INFO][6737] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:51.633195 containerd[2083]: 2025-07-07 00:01:51.591 [INFO][6737] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:51.633195 containerd[2083]: 2025-07-07 00:01:51.622 [WARNING][6737] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56" HandleID="k8s-pod-network.80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56" Workload="ip--172--31--19--107-k8s-csi--node--driver--vmlwg-eth0" Jul 7 00:01:51.633195 containerd[2083]: 2025-07-07 00:01:51.622 [INFO][6737] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56" HandleID="k8s-pod-network.80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56" Workload="ip--172--31--19--107-k8s-csi--node--driver--vmlwg-eth0" Jul 7 00:01:51.633195 containerd[2083]: 2025-07-07 00:01:51.625 [INFO][6737] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:51.633195 containerd[2083]: 2025-07-07 00:01:51.629 [INFO][6722] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56" Jul 7 00:01:51.633195 containerd[2083]: time="2025-07-07T00:01:51.632428115Z" level=info msg="TearDown network for sandbox \"80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56\" successfully" Jul 7 00:01:51.640145 containerd[2083]: time="2025-07-07T00:01:51.637802887Z" level=info msg="StopPodSandbox for \"80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56\" returns successfully" Jul 7 00:01:51.640145 containerd[2083]: time="2025-07-07T00:01:51.638926362Z" level=info msg="RemovePodSandbox for \"80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56\"" Jul 7 00:01:51.645192 containerd[2083]: time="2025-07-07T00:01:51.644986768Z" level=info msg="Forcibly stopping sandbox \"80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56\"" Jul 7 00:01:52.109412 containerd[2083]: 2025-07-07 00:01:51.868 [WARNING][6751] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--107-k8s-csi--node--driver--vmlwg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fd3bd012-86e5-4807-95d5-ad6901284597", ResourceVersion:"1239", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-107", ContainerID:"4215ddfea058f17b8b4dcedfbf18e67acec438012183d1c35386b79cf91df7fc", Pod:"csi-node-driver-vmlwg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.66.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic2f301a877d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:52.109412 containerd[2083]: 2025-07-07 00:01:51.870 [INFO][6751] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56" Jul 7 00:01:52.109412 containerd[2083]: 2025-07-07 00:01:51.870 [INFO][6751] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56" iface="eth0" netns="" Jul 7 00:01:52.109412 containerd[2083]: 2025-07-07 00:01:51.870 [INFO][6751] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56" Jul 7 00:01:52.109412 containerd[2083]: 2025-07-07 00:01:51.870 [INFO][6751] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56" Jul 7 00:01:52.109412 containerd[2083]: 2025-07-07 00:01:52.046 [INFO][6758] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56" HandleID="k8s-pod-network.80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56" Workload="ip--172--31--19--107-k8s-csi--node--driver--vmlwg-eth0" Jul 7 00:01:52.109412 containerd[2083]: 2025-07-07 00:01:52.049 [INFO][6758] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:52.109412 containerd[2083]: 2025-07-07 00:01:52.049 [INFO][6758] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:52.109412 containerd[2083]: 2025-07-07 00:01:52.089 [WARNING][6758] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56" HandleID="k8s-pod-network.80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56" Workload="ip--172--31--19--107-k8s-csi--node--driver--vmlwg-eth0" Jul 7 00:01:52.109412 containerd[2083]: 2025-07-07 00:01:52.089 [INFO][6758] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56" HandleID="k8s-pod-network.80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56" Workload="ip--172--31--19--107-k8s-csi--node--driver--vmlwg-eth0" Jul 7 00:01:52.109412 containerd[2083]: 2025-07-07 00:01:52.098 [INFO][6758] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:52.109412 containerd[2083]: 2025-07-07 00:01:52.103 [INFO][6751] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56" Jul 7 00:01:52.113683 containerd[2083]: time="2025-07-07T00:01:52.110806331Z" level=info msg="TearDown network for sandbox \"80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56\" successfully" Jul 7 00:01:52.158837 systemd-journald[1569]: Under memory pressure, flushing caches. Jul 7 00:01:52.166176 systemd-resolved[1982]: Under memory pressure, flushing caches. Jul 7 00:01:52.166198 systemd-resolved[1982]: Flushed all caches. Jul 7 00:01:52.197008 containerd[2083]: time="2025-07-07T00:01:52.194548493Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:01:52.208853 containerd[2083]: time="2025-07-07T00:01:52.208783310Z" level=info msg="RemovePodSandbox \"80a083f1a5cc3ba20c70d7295a2fd6b0edc8408f0450e923839170586fd4fd56\" returns successfully" Jul 7 00:01:52.307062 containerd[2083]: time="2025-07-07T00:01:52.306936522Z" level=info msg="StopPodSandbox for \"fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e\"" Jul 7 00:01:52.943606 systemd[1]: Started sshd@21-172.31.19.107:22-147.75.109.163:60452.service - OpenSSH per-connection server daemon (147.75.109.163:60452). Jul 7 00:01:53.077004 containerd[2083]: 2025-07-07 00:01:52.606 [WARNING][6775] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--107-k8s-calico--apiserver--674b869996--5z2gh-eth0", GenerateName:"calico-apiserver-674b869996-", Namespace:"calico-apiserver", SelfLink:"", UID:"833c9c7e-23d5-495b-bc31-3bfc82fc6450", ResourceVersion:"1265", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"674b869996", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-107", ContainerID:"6fb99b233e7dc8ca0c1efd435ebeba3c3406fff8b57efce180b453682ca02035", Pod:"calico-apiserver-674b869996-5z2gh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.66.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6598e8ffb54", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:53.077004 containerd[2083]: 2025-07-07 00:01:52.612 [INFO][6775] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e" Jul 7 00:01:53.077004 containerd[2083]: 2025-07-07 00:01:52.612 [INFO][6775] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e" iface="eth0" netns="" Jul 7 00:01:53.077004 containerd[2083]: 2025-07-07 00:01:52.614 [INFO][6775] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e" Jul 7 00:01:53.077004 containerd[2083]: 2025-07-07 00:01:52.614 [INFO][6775] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e" Jul 7 00:01:53.077004 containerd[2083]: 2025-07-07 00:01:52.829 [INFO][6782] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e" HandleID="k8s-pod-network.fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e" Workload="ip--172--31--19--107-k8s-calico--apiserver--674b869996--5z2gh-eth0" Jul 7 00:01:53.077004 containerd[2083]: 2025-07-07 00:01:52.834 [INFO][6782] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:53.077004 containerd[2083]: 2025-07-07 00:01:52.834 [INFO][6782] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:53.077004 containerd[2083]: 2025-07-07 00:01:52.870 [WARNING][6782] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e" HandleID="k8s-pod-network.fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e" Workload="ip--172--31--19--107-k8s-calico--apiserver--674b869996--5z2gh-eth0" Jul 7 00:01:53.077004 containerd[2083]: 2025-07-07 00:01:52.870 [INFO][6782] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e" HandleID="k8s-pod-network.fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e" Workload="ip--172--31--19--107-k8s-calico--apiserver--674b869996--5z2gh-eth0" Jul 7 00:01:53.077004 containerd[2083]: 2025-07-07 00:01:52.924 [INFO][6782] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:53.077004 containerd[2083]: 2025-07-07 00:01:52.982 [INFO][6775] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e" Jul 7 00:01:53.131733 containerd[2083]: time="2025-07-07T00:01:53.121107217Z" level=info msg="TearDown network for sandbox \"fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e\" successfully" Jul 7 00:01:53.143119 containerd[2083]: time="2025-07-07T00:01:53.143058343Z" level=info msg="StopPodSandbox for \"fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e\" returns successfully" Jul 7 00:01:53.667697 sshd[6789]: Accepted publickey for core from 147.75.109.163 port 60452 ssh2: RSA SHA256:Fg5PNVD0YYTKLtsC41iGPKg9RGs648NnOx0QWGalr+Y Jul 7 00:01:53.702360 sshd[6789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:53.803034 systemd-logind[2061]: New session 22 of user core. Jul 7 00:01:53.807768 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 7 00:01:54.179881 systemd-journald[1569]: Under memory pressure, flushing caches. Jul 7 00:01:54.175013 systemd-resolved[1982]: Under memory pressure, flushing caches. Jul 7 00:01:54.226555 containerd[2083]: time="2025-07-07T00:01:54.192826404Z" level=info msg="RemovePodSandbox for \"fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e\"" Jul 7 00:01:54.175025 systemd-resolved[1982]: Flushed all caches. Jul 7 00:01:54.341759 containerd[2083]: time="2025-07-07T00:01:54.334622138Z" level=info msg="Forcibly stopping sandbox \"fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e\"" Jul 7 00:01:55.866693 kubelet[3316]: E0707 00:01:55.818034 3316 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.272s" Jul 7 00:01:56.240028 systemd-journald[1569]: Under memory pressure, flushing caches. Jul 7 00:01:56.233586 systemd-resolved[1982]: Under memory pressure, flushing caches. Jul 7 00:01:56.233724 systemd-resolved[1982]: Flushed all caches. Jul 7 00:01:57.666690 containerd[2083]: 2025-07-07 00:01:56.110 [WARNING][6807] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--107-k8s-calico--apiserver--674b869996--5z2gh-eth0", GenerateName:"calico-apiserver-674b869996-", Namespace:"calico-apiserver", SelfLink:"", UID:"833c9c7e-23d5-495b-bc31-3bfc82fc6450", ResourceVersion:"1265", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"674b869996", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-107", ContainerID:"6fb99b233e7dc8ca0c1efd435ebeba3c3406fff8b57efce180b453682ca02035", Pod:"calico-apiserver-674b869996-5z2gh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.66.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6598e8ffb54", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:57.666690 containerd[2083]: 2025-07-07 00:01:56.126 [INFO][6807] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e" Jul 7 00:01:57.666690 containerd[2083]: 2025-07-07 00:01:56.126 [INFO][6807] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e" iface="eth0" netns="" Jul 7 00:01:57.666690 containerd[2083]: 2025-07-07 00:01:56.126 [INFO][6807] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e" Jul 7 00:01:57.666690 containerd[2083]: 2025-07-07 00:01:56.126 [INFO][6807] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e" Jul 7 00:01:57.666690 containerd[2083]: 2025-07-07 00:01:56.941 [INFO][6838] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e" HandleID="k8s-pod-network.fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e" Workload="ip--172--31--19--107-k8s-calico--apiserver--674b869996--5z2gh-eth0" Jul 7 00:01:57.666690 containerd[2083]: 2025-07-07 00:01:56.990 [INFO][6838] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:57.666690 containerd[2083]: 2025-07-07 00:01:57.001 [INFO][6838] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:57.666690 containerd[2083]: 2025-07-07 00:01:57.243 [WARNING][6838] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e" HandleID="k8s-pod-network.fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e" Workload="ip--172--31--19--107-k8s-calico--apiserver--674b869996--5z2gh-eth0" Jul 7 00:01:57.666690 containerd[2083]: 2025-07-07 00:01:57.247 [INFO][6838] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e" HandleID="k8s-pod-network.fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e" Workload="ip--172--31--19--107-k8s-calico--apiserver--674b869996--5z2gh-eth0" Jul 7 00:01:57.666690 containerd[2083]: 2025-07-07 00:01:57.254 [INFO][6838] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:57.666690 containerd[2083]: 2025-07-07 00:01:57.410 [INFO][6807] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e" Jul 7 00:01:57.816461 containerd[2083]: time="2025-07-07T00:01:57.796676822Z" level=info msg="TearDown network for sandbox \"fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e\" successfully" Jul 7 00:01:58.044231 containerd[2083]: time="2025-07-07T00:01:58.041481396Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:01:58.044231 containerd[2083]: time="2025-07-07T00:01:58.041701812Z" level=info msg="RemovePodSandbox \"fb9847a771d918f74f93395717fa7a29dc6a88f0b01e7b528b9132fe699b7c3e\" returns successfully" Jul 7 00:01:58.271052 systemd-journald[1569]: Under memory pressure, flushing caches. Jul 7 00:01:58.276738 systemd-resolved[1982]: Under memory pressure, flushing caches. Jul 7 00:01:58.276755 systemd-resolved[1982]: Flushed all caches. Jul 7 00:01:59.830315 containerd[2083]: time="2025-07-07T00:01:59.824044306Z" level=info msg="StopPodSandbox for \"8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b\"" Jul 7 00:02:00.318331 systemd-resolved[1982]: Under memory pressure, flushing caches. Jul 7 00:02:00.331720 systemd-journald[1569]: Under memory pressure, flushing caches. Jul 7 00:02:00.318344 systemd-resolved[1982]: Flushed all caches. Jul 7 00:02:00.886232 kubelet[3316]: E0707 00:02:00.747421 3316 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.274s" Jul 7 00:02:02.404374 systemd-journald[1569]: Under memory pressure, flushing caches. Jul 7 00:02:02.401798 systemd-resolved[1982]: Under memory pressure, flushing caches. Jul 7 00:02:02.401839 systemd-resolved[1982]: Flushed all caches. Jul 7 00:02:03.524091 containerd[2083]: 2025-07-07 00:02:01.344 [WARNING][6864] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--107-k8s-coredns--7c65d6cfc9--xlnl6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2e6c58c9-1e4d-4fb6-9bf1-ad7b4521fb7e", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-107", ContainerID:"b65c321a511dbabf0df93cdb81982f02c3205a405b3d06c231eac6890e1792fd", Pod:"coredns-7c65d6cfc9-xlnl6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.66.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali55695e46569", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:02:03.524091 containerd[2083]: 2025-07-07 00:02:01.357 [INFO][6864] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b" Jul 7 00:02:03.524091 containerd[2083]: 2025-07-07 00:02:01.357 [INFO][6864] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b" iface="eth0" netns="" Jul 7 00:02:03.524091 containerd[2083]: 2025-07-07 00:02:01.358 [INFO][6864] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b" Jul 7 00:02:03.524091 containerd[2083]: 2025-07-07 00:02:01.358 [INFO][6864] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b" Jul 7 00:02:03.524091 containerd[2083]: 2025-07-07 00:02:03.282 [INFO][6871] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b" HandleID="k8s-pod-network.8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b" Workload="ip--172--31--19--107-k8s-coredns--7c65d6cfc9--xlnl6-eth0" Jul 7 00:02:03.524091 containerd[2083]: 2025-07-07 00:02:03.290 [INFO][6871] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:03.524091 containerd[2083]: 2025-07-07 00:02:03.292 [INFO][6871] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:02:03.524091 containerd[2083]: 2025-07-07 00:02:03.336 [WARNING][6871] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b" HandleID="k8s-pod-network.8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b" Workload="ip--172--31--19--107-k8s-coredns--7c65d6cfc9--xlnl6-eth0"
Jul 7 00:02:03.524091 containerd[2083]: 2025-07-07 00:02:03.337 [INFO][6871] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b" HandleID="k8s-pod-network.8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b" Workload="ip--172--31--19--107-k8s-coredns--7c65d6cfc9--xlnl6-eth0"
Jul 7 00:02:03.524091 containerd[2083]: 2025-07-07 00:02:03.350 [INFO][6871] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 00:02:03.524091 containerd[2083]: 2025-07-07 00:02:03.377 [INFO][6864] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b"
Jul 7 00:02:03.762372 containerd[2083]: time="2025-07-07T00:02:03.744814881Z" level=info msg="TearDown network for sandbox \"8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b\" successfully"
Jul 7 00:02:03.762372 containerd[2083]: time="2025-07-07T00:02:03.761838984Z" level=info msg="StopPodSandbox for \"8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b\" returns successfully"
Jul 7 00:02:03.918805 containerd[2083]: time="2025-07-07T00:02:03.917143634Z" level=info msg="RemovePodSandbox for \"8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b\""
Jul 7 00:02:03.933626 containerd[2083]: time="2025-07-07T00:02:03.927262542Z" level=info msg="Forcibly stopping sandbox \"8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b\""
Jul 7 00:02:04.236787 kubelet[3316]: E0707 00:02:04.026035 3316 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.539s"
Jul 7 00:02:04.430192 systemd-journald[1569]: Under memory pressure, flushing caches.
Jul 7 00:02:04.413105 systemd-resolved[1982]: Under memory pressure, flushing caches.
Jul 7 00:02:04.413117 systemd-resolved[1982]: Flushed all caches.
Jul 7 00:02:04.570687 sshd[6789]: pam_unix(sshd:session): session closed for user core
Jul 7 00:02:04.805952 systemd[1]: sshd@21-172.31.19.107:22-147.75.109.163:60452.service: Deactivated successfully.
Jul 7 00:02:04.820144 systemd[1]: session-22.scope: Deactivated successfully.
Jul 7 00:02:04.823915 systemd-logind[2061]: Session 22 logged out. Waiting for processes to exit.
Jul 7 00:02:04.870109 systemd-logind[2061]: Removed session 22.
Jul 7 00:02:06.286365 containerd[2083]: 2025-07-07 00:02:05.545 [WARNING][6885] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--107-k8s-coredns--7c65d6cfc9--xlnl6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2e6c58c9-1e4d-4fb6-9bf1-ad7b4521fb7e", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 53, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-107", ContainerID:"b65c321a511dbabf0df93cdb81982f02c3205a405b3d06c231eac6890e1792fd", Pod:"coredns-7c65d6cfc9-xlnl6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.66.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali55695e46569", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 00:02:06.286365 containerd[2083]: 2025-07-07 00:02:05.549 [INFO][6885] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b"
Jul 7 00:02:06.286365 containerd[2083]: 2025-07-07 00:02:05.549 [INFO][6885] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b" iface="eth0" netns=""
Jul 7 00:02:06.286365 containerd[2083]: 2025-07-07 00:02:05.549 [INFO][6885] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b"
Jul 7 00:02:06.286365 containerd[2083]: 2025-07-07 00:02:05.549 [INFO][6885] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b"
Jul 7 00:02:06.286365 containerd[2083]: 2025-07-07 00:02:06.064 [INFO][6931] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b" HandleID="k8s-pod-network.8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b" Workload="ip--172--31--19--107-k8s-coredns--7c65d6cfc9--xlnl6-eth0"
Jul 7 00:02:06.286365 containerd[2083]: 2025-07-07 00:02:06.083 [INFO][6931] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 00:02:06.286365 containerd[2083]: 2025-07-07 00:02:06.090 [INFO][6931] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 00:02:06.286365 containerd[2083]: 2025-07-07 00:02:06.165 [WARNING][6931] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b" HandleID="k8s-pod-network.8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b" Workload="ip--172--31--19--107-k8s-coredns--7c65d6cfc9--xlnl6-eth0"
Jul 7 00:02:06.286365 containerd[2083]: 2025-07-07 00:02:06.165 [INFO][6931] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b" HandleID="k8s-pod-network.8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b" Workload="ip--172--31--19--107-k8s-coredns--7c65d6cfc9--xlnl6-eth0"
Jul 7 00:02:06.286365 containerd[2083]: 2025-07-07 00:02:06.168 [INFO][6931] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 00:02:06.286365 containerd[2083]: 2025-07-07 00:02:06.190 [INFO][6885] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b"
Jul 7 00:02:06.470384 systemd-journald[1569]: Under memory pressure, flushing caches.
Jul 7 00:02:06.465809 systemd-resolved[1982]: Under memory pressure, flushing caches.
Jul 7 00:02:06.465822 systemd-resolved[1982]: Flushed all caches.
Jul 7 00:02:06.502672 containerd[2083]: time="2025-07-07T00:02:06.470517910Z" level=info msg="TearDown network for sandbox \"8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b\" successfully"
Jul 7 00:02:06.689752 containerd[2083]: time="2025-07-07T00:02:06.688527404Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 7 00:02:06.700684 containerd[2083]: time="2025-07-07T00:02:06.697842286Z" level=info msg="RemovePodSandbox \"8dd14bf1411beb4ebb98563ce2fb88f49e20344e703047a40518899355975e1b\" returns successfully"
Jul 7 00:02:06.822705 containerd[2083]: time="2025-07-07T00:02:06.820199420Z" level=info msg="StopPodSandbox for \"f27fa6a581cdff01c995afe28c668d659743aa45f05ef4ec1c2f95e97812ec6b\""
Jul 7 00:02:07.762180 systemd[1]: run-containerd-runc-k8s.io-0aeb358699eab97d467e570dda904cb87d90ade7c9e789f8d2d90669fb96fff7-runc.Yt7s62.mount: Deactivated successfully.
Jul 7 00:02:08.516041 systemd-journald[1569]: Under memory pressure, flushing caches.
Jul 7 00:02:08.515755 systemd-resolved[1982]: Under memory pressure, flushing caches.
Jul 7 00:02:08.515767 systemd-resolved[1982]: Flushed all caches.
Jul 7 00:02:18.994037 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-197bb42de4bca5ce2f7aeb4206c296bf1689332c2ba444e1b3a6aa4128870d9e-rootfs.mount: Deactivated successfully.
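The "Under memory pressure, flushing caches" lines that recur through this window come from systemd services reacting to memory pressure; recent systemd releases watch the kernel's pressure-stall information (PSI) for this. The same signal can be read directly from /proc/pressure/memory, whose format is "some avg10=... avg60=... avg300=... total=..." followed by a "full" line. A sketch that parses the "some avg10" figure (assumes Linux with PSI enabled; the 10.0 threshold is an arbitrary illustration, not systemd's):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// memoryPressureAvg10 returns the "some avg10" percentage from the
// kernel's PSI interface at /proc/pressure/memory.
func memoryPressureAvg10() (float64, error) {
	f, err := os.Open("/proc/pressure/memory")
	if err != nil {
		return 0, err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) > 1 && fields[0] == "some" {
			return strconv.ParseFloat(strings.TrimPrefix(fields[1], "avg10="), 64)
		}
	}
	return 0, fmt.Errorf("no 'some' line in /proc/pressure/memory: %v", sc.Err())
}

func main() {
	avg10, err := memoryPressureAvg10()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// A cache-holding daemon might flush once the 10-second average
	// climbs, which is roughly what the flushes in the log reflect.
	if avg10 > 10.0 {
		fmt.Println("Under memory pressure, flushing caches.")
	}
}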
Jul 7 00:02:19.028637 containerd[2083]: time="2025-07-07T00:02:19.019819947Z" level=info msg="shim disconnected" id=197bb42de4bca5ce2f7aeb4206c296bf1689332c2ba444e1b3a6aa4128870d9e namespace=k8s.io
Jul 7 00:02:19.028637 containerd[2083]: time="2025-07-07T00:02:19.028637208Z" level=warning msg="cleaning up after shim disconnected" id=197bb42de4bca5ce2f7aeb4206c296bf1689332c2ba444e1b3a6aa4128870d9e namespace=k8s.io
Jul 7 00:02:19.028637 containerd[2083]: time="2025-07-07T00:02:19.028672196Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 00:02:19.787219 kubelet[3316]: I0707 00:02:19.787098 3316 scope.go:117] "RemoveContainer" containerID="197bb42de4bca5ce2f7aeb4206c296bf1689332c2ba444e1b3a6aa4128870d9e"
Jul 7 00:02:19.876480 containerd[2083]: time="2025-07-07T00:02:19.876410317Z" level=info msg="CreateContainer within sandbox \"5d1d36d1a4dc9374996c6c322a5d2076430ae3c568cbdb992d9eefccf1489f95\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jul 7 00:02:19.957255 containerd[2083]: time="2025-07-07T00:02:19.957194331Z" level=info msg="CreateContainer within sandbox \"5d1d36d1a4dc9374996c6c322a5d2076430ae3c568cbdb992d9eefccf1489f95\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"529af79e4593e6544afe685ce025f43b5db1fff9eb3d1c944b5579d38e14f4d2\""
Jul 7 00:02:19.957893 containerd[2083]: time="2025-07-07T00:02:19.957768909Z" level=info msg="StartContainer for \"529af79e4593e6544afe685ce025f43b5db1fff9eb3d1c944b5579d38e14f4d2\""
Jul 7 00:02:20.066889 containerd[2083]: time="2025-07-07T00:02:20.066342856Z" level=info msg="StartContainer for \"529af79e4593e6544afe685ce025f43b5db1fff9eb3d1c944b5579d38e14f4d2\" returns successfully"
Jul 7 00:02:20.182932 containerd[2083]: time="2025-07-07T00:02:20.182845349Z" level=info msg="shim disconnected" id=1eed787538f7f49f84d142c306ed2ff8760a2129297e205a07b027720fcf5605 namespace=k8s.io
Jul 7 00:02:20.182932 containerd[2083]: time="2025-07-07T00:02:20.182927061Z" level=warning msg="cleaning up after shim disconnected" id=1eed787538f7f49f84d142c306ed2ff8760a2129297e205a07b027720fcf5605 namespace=k8s.io
Jul 7 00:02:20.182932 containerd[2083]: time="2025-07-07T00:02:20.182939526Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 00:02:20.190467 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1eed787538f7f49f84d142c306ed2ff8760a2129297e205a07b027720fcf5605-rootfs.mount: Deactivated successfully.
Jul 7 00:02:20.729934 kubelet[3316]: I0707 00:02:20.729649 3316 scope.go:117] "RemoveContainer" containerID="1eed787538f7f49f84d142c306ed2ff8760a2129297e205a07b027720fcf5605"
Jul 7 00:02:20.743755 containerd[2083]: time="2025-07-07T00:02:20.743689864Z" level=info msg="CreateContainer within sandbox \"97989de53c1732043c3296a41d90821c422b0938b737f096956ca1f497ee7701\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jul 7 00:02:20.771267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1617222928.mount: Deactivated successfully.
Jul 7 00:02:20.777909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2522733248.mount: Deactivated successfully.
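Each "shim disconnected" / "cleaning up dead shim" pair above is containerd noticing that a container's runtime shim has exited; kubelet then logs a "RemoveContainer" scope for the dead container and recreates it in the same sandbox with the Attempt counter bumped (tigera-operator and kube-controller-manager both reappear here as Attempt:1, presumably up from Attempt:0). A toy model of that observable sequence -- purely illustrative, not containerd's or kubelet's actual code:

package main

import "fmt"

// containerMeta mirrors the {Name, Attempt} pair visible in the
// CreateContainer messages above; the type is hypothetical.
type containerMeta struct {
	Name    string
	Attempt int
}

// restart models the logged sequence: the exited container is removed,
// then recreated in the same sandbox with the attempt incremented.
func restart(sandboxID string, dead containerMeta) containerMeta {
	fmt.Printf("RemoveContainer %q attempt=%d\n", dead.Name, dead.Attempt)
	next := containerMeta{Name: dead.Name, Attempt: dead.Attempt + 1}
	fmt.Printf("CreateContainer within sandbox %q for %+v\n", sandboxID, next)
	return next
}

func main() {
	restart("5d1d36d1a4dc...", containerMeta{Name: "tigera-operator", Attempt: 0})
}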
Jul 7 00:02:20.779515 containerd[2083]: time="2025-07-07T00:02:20.779273936Z" level=info msg="CreateContainer within sandbox \"97989de53c1732043c3296a41d90821c422b0938b737f096956ca1f497ee7701\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"3bea8dcc1da8dd735e931664d91dd660ed7a9d1d2e0a4cea8caf9f3a98cb81de\""
Jul 7 00:02:20.780271 containerd[2083]: time="2025-07-07T00:02:20.780238798Z" level=info msg="StartContainer for \"3bea8dcc1da8dd735e931664d91dd660ed7a9d1d2e0a4cea8caf9f3a98cb81de\""
Jul 7 00:02:20.885191 containerd[2083]: time="2025-07-07T00:02:20.884559382Z" level=info msg="StartContainer for \"3bea8dcc1da8dd735e931664d91dd660ed7a9d1d2e0a4cea8caf9f3a98cb81de\" returns successfully"
Jul 7 00:02:23.715073 systemd[1]: run-containerd-runc-k8s.io-9c27421f980fe44badff996becf84f24ac2f603a8554225c00c3fbcd748fd532-runc.qg17AL.mount: Deactivated successfully.
Jul 7 00:02:24.624634 kubelet[3316]: E0707 00:02:24.619993 3316 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-107?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 7 00:02:25.421188 containerd[2083]: time="2025-07-07T00:02:25.420902827Z" level=info msg="shim disconnected" id=636cfbb17f269e5742e0bad4dbbd5575539f7f8b1a0d3f48b176f15723a364c8 namespace=k8s.io
Jul 7 00:02:25.421188 containerd[2083]: time="2025-07-07T00:02:25.421002545Z" level=warning msg="cleaning up after shim disconnected" id=636cfbb17f269e5742e0bad4dbbd5575539f7f8b1a0d3f48b176f15723a364c8 namespace=k8s.io
Jul 7 00:02:25.421188 containerd[2083]: time="2025-07-07T00:02:25.421017505Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 00:02:25.423443 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-636cfbb17f269e5742e0bad4dbbd5575539f7f8b1a0d3f48b176f15723a364c8-rootfs.mount: Deactivated successfully.
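The "Failed to update lease ... Client.Timeout exceeded while awaiting headers" error above is Go's standard net/http wording for an http.Client whose Timeout elapses before response headers arrive; kubelet's node-lease PUT also carries the server-side ?timeout=10s seen in the URL. A self-contained sketch that reproduces the same error shape against a deliberately slow test server (generic net/http, not kubelet's client-go code; the exact error prefix varies by Go release):

package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"strings"
	"time"
)

func main() {
	// A server that never answers within the client's budget stands in
	// for an API server starved of CPU (see the Housekeeping warnings).
	slow := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		time.Sleep(2 * time.Second)
	}))
	defer slow.Close()

	client := &http.Client{Timeout: 500 * time.Millisecond}
	req, _ := http.NewRequest(http.MethodPut,
		slow.URL+"/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-107?timeout=10s",
		strings.NewReader("{}"))

	_, err := client.Do(req)
	// The error text ends with the same "(Client.Timeout exceeded while
	// awaiting headers)" suffix that kubelet surfaces in the log.
	fmt.Println(err)
}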
Jul 7 00:02:25.469473 containerd[2083]: time="2025-07-07T00:02:25.468461222Z" level=warning msg="cleanup warnings time=\"2025-07-07T00:02:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 7 00:02:25.761431 kubelet[3316]: I0707 00:02:25.761293 3316 scope.go:117] "RemoveContainer" containerID="636cfbb17f269e5742e0bad4dbbd5575539f7f8b1a0d3f48b176f15723a364c8"
Jul 7 00:02:25.764500 containerd[2083]: time="2025-07-07T00:02:25.764456455Z" level=info msg="CreateContainer within sandbox \"405562eccb247a8bba0bb732cfe33d23fe95a4f3edff0837d04a7ec32b4e0bc3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul 7 00:02:25.804804 containerd[2083]: time="2025-07-07T00:02:25.804741915Z" level=info msg="CreateContainer within sandbox \"405562eccb247a8bba0bb732cfe33d23fe95a4f3edff0837d04a7ec32b4e0bc3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"c6316a0fd8362fc36ac632f89057b48f1564f8fb204acbac4a6c458f6c73e6ca\""
Jul 7 00:02:25.807265 containerd[2083]: time="2025-07-07T00:02:25.805425530Z" level=info msg="StartContainer for \"c6316a0fd8362fc36ac632f89057b48f1564f8fb204acbac4a6c458f6c73e6ca\""
Jul 7 00:02:25.903251 containerd[2083]: time="2025-07-07T00:02:25.903180938Z" level=info msg="StartContainer for \"c6316a0fd8362fc36ac632f89057b48f1564f8fb204acbac4a6c458f6c73e6ca\" returns successfully"
Jul 7 00:02:31.952756 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-529af79e4593e6544afe685ce025f43b5db1fff9eb3d1c944b5579d38e14f4d2-rootfs.mount: Deactivated successfully.
Jul 7 00:02:31.957970 containerd[2083]: time="2025-07-07T00:02:31.955181924Z" level=info msg="shim disconnected" id=529af79e4593e6544afe685ce025f43b5db1fff9eb3d1c944b5579d38e14f4d2 namespace=k8s.io
Jul 7 00:02:31.957970 containerd[2083]: time="2025-07-07T00:02:31.955787607Z" level=warning msg="cleaning up after shim disconnected" id=529af79e4593e6544afe685ce025f43b5db1fff9eb3d1c944b5579d38e14f4d2 namespace=k8s.io
Jul 7 00:02:31.957970 containerd[2083]: time="2025-07-07T00:02:31.955800015Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 00:02:31.974072 containerd[2083]: time="2025-07-07T00:02:31.974009686Z" level=warning msg="cleanup warnings time=\"2025-07-07T00:02:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 7 00:02:32.877728 kubelet[3316]: I0707 00:02:32.877580 3316 scope.go:117] "RemoveContainer" containerID="197bb42de4bca5ce2f7aeb4206c296bf1689332c2ba444e1b3a6aa4128870d9e"
Jul 7 00:02:32.878312 kubelet[3316]: I0707 00:02:32.878060 3316 scope.go:117] "RemoveContainer" containerID="529af79e4593e6544afe685ce025f43b5db1fff9eb3d1c944b5579d38e14f4d2"
Jul 7 00:02:32.895021 kubelet[3316]: E0707 00:02:32.883567 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-5bf8dfcb4-tp5nx_tigera-operator(f9698c10-f42f-484d-a151-e6595d5d8bbf)\"" pod="tigera-operator/tigera-operator-5bf8dfcb4-tp5nx" podUID="f9698c10-f42f-484d-a151-e6595d5d8bbf"
Jul 7 00:02:32.990710 containerd[2083]: time="2025-07-07T00:02:32.990641480Z" level=info msg="RemoveContainer for \"197bb42de4bca5ce2f7aeb4206c296bf1689332c2ba444e1b3a6aa4128870d9e\""
Jul 7 00:02:33.047799 containerd[2083]: time="2025-07-07T00:02:33.047726899Z" level=info msg="RemoveContainer for \"197bb42de4bca5ce2f7aeb4206c296bf1689332c2ba444e1b3a6aa4128870d9e\" returns successfully"
Jul 7 00:02:34.625811 kubelet[3316]: E0707 00:02:34.625368 3316 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-107?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"