Jul 15 05:15:10.925923 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Jul 15 03:28:48 -00 2025 Jul 15 05:15:10.925963 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=926b029026d98240a9e8b6527b65fc026ae523bea87c3b77ffd7237bcc7be4fb Jul 15 05:15:10.925977 kernel: BIOS-provided physical RAM map: Jul 15 05:15:10.925988 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 15 05:15:10.925997 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Jul 15 05:15:10.926007 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved Jul 15 05:15:10.926020 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Jul 15 05:15:10.926031 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Jul 15 05:15:10.926044 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Jul 15 05:15:10.926053 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Jul 15 05:15:10.926064 kernel: NX (Execute Disable) protection: active Jul 15 05:15:10.926074 kernel: APIC: Static calls initialized Jul 15 05:15:10.926085 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable Jul 15 05:15:10.926096 kernel: extended physical RAM map: Jul 15 05:15:10.926114 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 15 05:15:10.926126 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable Jul 15 05:15:10.926137 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable Jul 15 05:15:10.926149 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable Jul 15 05:15:10.926161 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved Jul 15 05:15:10.926173 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Jul 15 05:15:10.926185 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Jul 15 05:15:10.926196 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable Jul 15 05:15:10.926208 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Jul 15 05:15:10.926219 kernel: efi: EFI v2.7 by EDK II Jul 15 05:15:10.926235 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518 Jul 15 05:15:10.926247 kernel: secureboot: Secure boot disabled Jul 15 05:15:10.926258 kernel: SMBIOS 2.7 present. 
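A note on the command line logged above: mount.usr, verity.usr and verity.usrhash are the options the Flatcar initrd later uses to assemble the read-only /usr over dm-verity. As a minimal sketch (an illustrative parser, not the one dracut or Flatcar actually uses), the same string as read back from /proc/cmdline can be split into key=value options like this:

    # Minimal sketch: split a kernel command line like the one logged above
    # into a dict of key=value options (flags without '=' map to None).
    # Illustration only; ignores any quoting a real command line may contain.
    import shlex

    def parse_cmdline(cmdline: str) -> dict:
        opts = {}
        for token in shlex.split(cmdline):
            key, sep, value = token.partition("=")
            opts[key] = value if sep else None
        return opts

    with open("/proc/cmdline") as f:
        opts = parse_cmdline(f.read())

    # e.g. opts["root"] -> "LABEL=ROOT", opts["verity.usrhash"] -> "926b0290..."
    print(opts.get("verity.usr"), opts.get("verity.usrhash"))
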
Jul 15 05:15:10.926269 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Jul 15 05:15:10.926279 kernel: DMI: Memory slots populated: 1/1 Jul 15 05:15:10.926291 kernel: Hypervisor detected: KVM Jul 15 05:15:10.926302 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 15 05:15:10.926312 kernel: kvm-clock: using sched offset of 5606499486 cycles Jul 15 05:15:10.926325 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 15 05:15:10.926339 kernel: tsc: Detected 2500.006 MHz processor Jul 15 05:15:10.926351 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 15 05:15:10.926368 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 15 05:15:10.926381 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Jul 15 05:15:10.926395 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jul 15 05:15:10.926408 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 15 05:15:10.926422 kernel: Using GB pages for direct mapping Jul 15 05:15:10.926441 kernel: ACPI: Early table checksum verification disabled Jul 15 05:15:10.926457 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Jul 15 05:15:10.926471 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Jul 15 05:15:10.926485 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Jul 15 05:15:10.926500 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jul 15 05:15:10.926513 kernel: ACPI: FACS 0x00000000789D0000 000040 Jul 15 05:15:10.926528 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Jul 15 05:15:10.926542 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jul 15 05:15:10.926557 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jul 15 05:15:10.926573 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Jul 15 05:15:10.926588 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Jul 15 05:15:10.926601 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jul 15 05:15:10.926615 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jul 15 05:15:10.926629 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Jul 15 05:15:10.926643 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Jul 15 05:15:10.926657 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Jul 15 05:15:10.927017 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Jul 15 05:15:10.927042 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Jul 15 05:15:10.927055 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Jul 15 05:15:10.927068 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Jul 15 05:15:10.927081 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Jul 15 05:15:10.927093 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Jul 15 05:15:10.927106 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Jul 15 05:15:10.927119 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] Jul 15 05:15:10.927132 kernel: ACPI: Reserving BGRT table memory 
at [mem 0x78951000-0x78951037] Jul 15 05:15:10.927144 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Jul 15 05:15:10.927157 kernel: NUMA: Initialized distance table, cnt=1 Jul 15 05:15:10.927171 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff] Jul 15 05:15:10.927183 kernel: Zone ranges: Jul 15 05:15:10.927195 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 15 05:15:10.927208 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Jul 15 05:15:10.927222 kernel: Normal empty Jul 15 05:15:10.927234 kernel: Device empty Jul 15 05:15:10.927247 kernel: Movable zone start for each node Jul 15 05:15:10.927258 kernel: Early memory node ranges Jul 15 05:15:10.927270 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jul 15 05:15:10.927282 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Jul 15 05:15:10.927298 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Jul 15 05:15:10.927310 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Jul 15 05:15:10.927323 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 15 05:15:10.927336 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jul 15 05:15:10.927348 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jul 15 05:15:10.927360 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Jul 15 05:15:10.927373 kernel: ACPI: PM-Timer IO Port: 0xb008 Jul 15 05:15:10.927386 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 15 05:15:10.927399 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Jul 15 05:15:10.927415 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 15 05:15:10.927428 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 15 05:15:10.927441 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 15 05:15:10.927453 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 15 05:15:10.927466 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 15 05:15:10.927479 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 15 05:15:10.927492 kernel: TSC deadline timer available Jul 15 05:15:10.927505 kernel: CPU topo: Max. logical packages: 1 Jul 15 05:15:10.927517 kernel: CPU topo: Max. logical dies: 1 Jul 15 05:15:10.927533 kernel: CPU topo: Max. dies per package: 1 Jul 15 05:15:10.927545 kernel: CPU topo: Max. threads per core: 2 Jul 15 05:15:10.927558 kernel: CPU topo: Num. cores per package: 1 Jul 15 05:15:10.927570 kernel: CPU topo: Num. 
threads per package: 2 Jul 15 05:15:10.927583 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jul 15 05:15:10.927596 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jul 15 05:15:10.927608 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Jul 15 05:15:10.927621 kernel: Booting paravirtualized kernel on KVM Jul 15 05:15:10.927634 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 15 05:15:10.927650 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jul 15 05:15:10.927662 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jul 15 05:15:10.927688 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jul 15 05:15:10.927701 kernel: pcpu-alloc: [0] 0 1 Jul 15 05:15:10.927713 kernel: kvm-guest: PV spinlocks enabled Jul 15 05:15:10.927726 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 15 05:15:10.927741 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=926b029026d98240a9e8b6527b65fc026ae523bea87c3b77ffd7237bcc7be4fb Jul 15 05:15:10.927755 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 15 05:15:10.927771 kernel: random: crng init done Jul 15 05:15:10.927784 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 15 05:15:10.927797 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 15 05:15:10.927810 kernel: Fallback order for Node 0: 0 Jul 15 05:15:10.927823 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451 Jul 15 05:15:10.927836 kernel: Policy zone: DMA32 Jul 15 05:15:10.927860 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 15 05:15:10.927876 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 15 05:15:10.927890 kernel: Kernel/User page tables isolation: enabled Jul 15 05:15:10.927904 kernel: ftrace: allocating 40097 entries in 157 pages Jul 15 05:15:10.927917 kernel: ftrace: allocated 157 pages with 5 groups Jul 15 05:15:10.927931 kernel: Dynamic Preempt: voluntary Jul 15 05:15:10.927947 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 15 05:15:10.927962 kernel: rcu: RCU event tracing is enabled. Jul 15 05:15:10.927976 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 15 05:15:10.927989 kernel: Trampoline variant of Tasks RCU enabled. Jul 15 05:15:10.928003 kernel: Rude variant of Tasks RCU enabled. Jul 15 05:15:10.928019 kernel: Tracing variant of Tasks RCU enabled. Jul 15 05:15:10.928033 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 15 05:15:10.928046 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 15 05:15:10.928060 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 15 05:15:10.928074 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 15 05:15:10.928087 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
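The "(order: N, M bytes)" annotations on the hash-table lines above follow from the allocation size: the order is log2 of the number of pages needed, assuming the usual 4096-byte x86-64 page. A quick check of the Dentry-cache and Inode-cache figures printed here:

    # Quick arithmetic check of the "(order: N, M bytes)" hash-table lines above,
    # assuming 4096-byte pages: order = log2(bytes / page_size).
    import math

    PAGE = 4096
    for name, entries, size_bytes in [
        ("Dentry cache", 262144, 2097152),
        ("Inode-cache", 131072, 1048576),
    ]:
        order = int(math.log2(size_bytes / PAGE))
        per_entry = size_bytes // entries
        print(f"{name}: order {order}, {per_entry} bytes per bucket")
    # -> Dentry cache: order 9, 8 bytes per bucket
    # -> Inode-cache: order 8, 8 bytes per bucket
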
Jul 15 05:15:10.928101 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jul 15 05:15:10.928115 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 15 05:15:10.928128 kernel: Console: colour dummy device 80x25 Jul 15 05:15:10.928145 kernel: printk: legacy console [tty0] enabled Jul 15 05:15:10.928158 kernel: printk: legacy console [ttyS0] enabled Jul 15 05:15:10.928172 kernel: ACPI: Core revision 20240827 Jul 15 05:15:10.928186 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Jul 15 05:15:10.928199 kernel: APIC: Switch to symmetric I/O mode setup Jul 15 05:15:10.928213 kernel: x2apic enabled Jul 15 05:15:10.928226 kernel: APIC: Switched APIC routing to: physical x2apic Jul 15 05:15:10.928240 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093fa6a7c, max_idle_ns: 440795295209 ns Jul 15 05:15:10.928254 kernel: Calibrating delay loop (skipped) preset value.. 5000.01 BogoMIPS (lpj=2500006) Jul 15 05:15:10.928271 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jul 15 05:15:10.928285 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jul 15 05:15:10.928298 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 15 05:15:10.928312 kernel: Spectre V2 : Mitigation: Retpolines Jul 15 05:15:10.928325 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 15 05:15:10.928338 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Jul 15 05:15:10.928352 kernel: RETBleed: Vulnerable Jul 15 05:15:10.928365 kernel: Speculative Store Bypass: Vulnerable Jul 15 05:15:10.928378 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Jul 15 05:15:10.928392 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 15 05:15:10.928408 kernel: GDS: Unknown: Dependent on hypervisor status Jul 15 05:15:10.928421 kernel: ITS: Mitigation: Aligned branch/return thunks Jul 15 05:15:10.928434 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 15 05:15:10.928447 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 15 05:15:10.928461 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 15 05:15:10.928474 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jul 15 05:15:10.928487 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jul 15 05:15:10.928501 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jul 15 05:15:10.928514 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jul 15 05:15:10.928527 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jul 15 05:15:10.928540 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jul 15 05:15:10.928556 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 15 05:15:10.928570 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jul 15 05:15:10.928583 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jul 15 05:15:10.928596 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Jul 15 05:15:10.928609 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Jul 15 05:15:10.928622 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Jul 15 05:15:10.928636 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Jul 15 05:15:10.928649 kernel: x86/fpu: Enabled xstate features 0x2ff, context 
size is 2568 bytes, using 'compacted' format. Jul 15 05:15:10.928662 kernel: Freeing SMP alternatives memory: 32K Jul 15 05:15:10.928683 kernel: pid_max: default: 32768 minimum: 301 Jul 15 05:15:10.928698 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 15 05:15:10.928715 kernel: landlock: Up and running. Jul 15 05:15:10.928728 kernel: SELinux: Initializing. Jul 15 05:15:10.928741 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 15 05:15:10.928755 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 15 05:15:10.928768 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Jul 15 05:15:10.928781 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jul 15 05:15:10.928794 kernel: signal: max sigframe size: 3632 Jul 15 05:15:10.928808 kernel: rcu: Hierarchical SRCU implementation. Jul 15 05:15:10.928821 kernel: rcu: Max phase no-delay instances is 400. Jul 15 05:15:10.928834 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 15 05:15:10.928850 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 15 05:15:10.928864 kernel: smp: Bringing up secondary CPUs ... Jul 15 05:15:10.928878 kernel: smpboot: x86: Booting SMP configuration: Jul 15 05:15:10.928891 kernel: .... node #0, CPUs: #1 Jul 15 05:15:10.928904 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jul 15 05:15:10.928918 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jul 15 05:15:10.928931 kernel: smp: Brought up 1 node, 2 CPUs Jul 15 05:15:10.928945 kernel: smpboot: Total of 2 processors activated (10000.02 BogoMIPS) Jul 15 05:15:10.928958 kernel: Memory: 1908052K/2037804K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54608K init, 2360K bss, 125188K reserved, 0K cma-reserved) Jul 15 05:15:10.928973 kernel: devtmpfs: initialized Jul 15 05:15:10.928986 kernel: x86/mm: Memory block size: 128MB Jul 15 05:15:10.929000 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Jul 15 05:15:10.929016 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 15 05:15:10.929484 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 15 05:15:10.929501 kernel: pinctrl core: initialized pinctrl subsystem Jul 15 05:15:10.929517 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 15 05:15:10.929532 kernel: audit: initializing netlink subsys (disabled) Jul 15 05:15:10.929548 kernel: audit: type=2000 audit(1752556508.189:1): state=initialized audit_enabled=0 res=1 Jul 15 05:15:10.929567 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 15 05:15:10.929583 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 15 05:15:10.929598 kernel: cpuidle: using governor menu Jul 15 05:15:10.929612 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 15 05:15:10.929635 kernel: dca service started, version 1.12.1 Jul 15 05:15:10.929649 kernel: PCI: Using configuration type 1 for base access Jul 15 05:15:10.929662 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 15 05:15:10.929698 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 15 05:15:10.929711 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 15 05:15:10.929728 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 15 05:15:10.929744 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 15 05:15:10.929759 kernel: ACPI: Added _OSI(Module Device) Jul 15 05:15:10.929774 kernel: ACPI: Added _OSI(Processor Device) Jul 15 05:15:10.929789 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 15 05:15:10.929804 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jul 15 05:15:10.929820 kernel: ACPI: Interpreter enabled Jul 15 05:15:10.929835 kernel: ACPI: PM: (supports S0 S5) Jul 15 05:15:10.929849 kernel: ACPI: Using IOAPIC for interrupt routing Jul 15 05:15:10.929866 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 15 05:15:10.929879 kernel: PCI: Using E820 reservations for host bridge windows Jul 15 05:15:10.929894 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jul 15 05:15:10.929909 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 15 05:15:10.930131 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jul 15 05:15:10.930282 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jul 15 05:15:10.930426 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jul 15 05:15:10.930452 kernel: acpiphp: Slot [3] registered Jul 15 05:15:10.930469 kernel: acpiphp: Slot [4] registered Jul 15 05:15:10.930485 kernel: acpiphp: Slot [5] registered Jul 15 05:15:10.930501 kernel: acpiphp: Slot [6] registered Jul 15 05:15:10.930517 kernel: acpiphp: Slot [7] registered Jul 15 05:15:10.930533 kernel: acpiphp: Slot [8] registered Jul 15 05:15:10.930550 kernel: acpiphp: Slot [9] registered Jul 15 05:15:10.930567 kernel: acpiphp: Slot [10] registered Jul 15 05:15:10.930582 kernel: acpiphp: Slot [11] registered Jul 15 05:15:10.930603 kernel: acpiphp: Slot [12] registered Jul 15 05:15:10.930620 kernel: acpiphp: Slot [13] registered Jul 15 05:15:10.930636 kernel: acpiphp: Slot [14] registered Jul 15 05:15:10.930652 kernel: acpiphp: Slot [15] registered Jul 15 05:15:10.930668 kernel: acpiphp: Slot [16] registered Jul 15 05:15:10.930696 kernel: acpiphp: Slot [17] registered Jul 15 05:15:10.930707 kernel: acpiphp: Slot [18] registered Jul 15 05:15:10.930720 kernel: acpiphp: Slot [19] registered Jul 15 05:15:10.930733 kernel: acpiphp: Slot [20] registered Jul 15 05:15:10.930746 kernel: acpiphp: Slot [21] registered Jul 15 05:15:10.930763 kernel: acpiphp: Slot [22] registered Jul 15 05:15:10.930778 kernel: acpiphp: Slot [23] registered Jul 15 05:15:10.930793 kernel: acpiphp: Slot [24] registered Jul 15 05:15:10.930808 kernel: acpiphp: Slot [25] registered Jul 15 05:15:10.930822 kernel: acpiphp: Slot [26] registered Jul 15 05:15:10.930837 kernel: acpiphp: Slot [27] registered Jul 15 05:15:10.930852 kernel: acpiphp: Slot [28] registered Jul 15 05:15:10.930867 kernel: acpiphp: Slot [29] registered Jul 15 05:15:10.930881 kernel: acpiphp: Slot [30] registered Jul 15 05:15:10.930899 kernel: acpiphp: Slot [31] registered Jul 15 05:15:10.930914 kernel: PCI host bridge to bus 0000:00 Jul 15 05:15:10.931081 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 15 05:15:10.931207 kernel: pci_bus 0000:00: root bus 
resource [io 0x0d00-0xffff window] Jul 15 05:15:10.931326 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 15 05:15:10.931441 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jul 15 05:15:10.931553 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Jul 15 05:15:10.931666 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 15 05:15:10.931892 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Jul 15 05:15:10.933748 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Jul 15 05:15:10.933919 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint Jul 15 05:15:10.934058 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jul 15 05:15:10.934187 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Jul 15 05:15:10.934314 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Jul 15 05:15:10.934436 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Jul 15 05:15:10.934558 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Jul 15 05:15:10.934698 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Jul 15 05:15:10.934824 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Jul 15 05:15:10.934958 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint Jul 15 05:15:10.935098 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref] Jul 15 05:15:10.935236 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref] Jul 15 05:15:10.935366 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 15 05:15:10.935508 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint Jul 15 05:15:10.935654 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff] Jul 15 05:15:10.937867 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint Jul 15 05:15:10.938007 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff] Jul 15 05:15:10.938028 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 15 05:15:10.938048 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 15 05:15:10.938064 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 15 05:15:10.938078 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 15 05:15:10.938095 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jul 15 05:15:10.938111 kernel: iommu: Default domain type: Translated Jul 15 05:15:10.938126 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 15 05:15:10.938142 kernel: efivars: Registered efivars operations Jul 15 05:15:10.938158 kernel: PCI: Using ACPI for IRQ routing Jul 15 05:15:10.938174 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 15 05:15:10.938192 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff] Jul 15 05:15:10.938206 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Jul 15 05:15:10.938219 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Jul 15 05:15:10.938360 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Jul 15 05:15:10.938489 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Jul 15 05:15:10.938615 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 15 05:15:10.938634 kernel: vgaarb: loaded Jul 15 05:15:10.938649 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jul 
15 05:15:10.938708 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Jul 15 05:15:10.938725 kernel: clocksource: Switched to clocksource kvm-clock Jul 15 05:15:10.938739 kernel: VFS: Disk quotas dquot_6.6.0 Jul 15 05:15:10.938755 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 15 05:15:10.938769 kernel: pnp: PnP ACPI init Jul 15 05:15:10.938785 kernel: pnp: PnP ACPI: found 5 devices Jul 15 05:15:10.938799 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 15 05:15:10.938814 kernel: NET: Registered PF_INET protocol family Jul 15 05:15:10.938829 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 15 05:15:10.938848 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jul 15 05:15:10.938863 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 15 05:15:10.938879 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 15 05:15:10.938893 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 15 05:15:10.938909 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jul 15 05:15:10.938923 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 15 05:15:10.938939 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 15 05:15:10.938954 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 15 05:15:10.938969 kernel: NET: Registered PF_XDP protocol family Jul 15 05:15:10.939106 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 15 05:15:10.939224 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 15 05:15:10.939344 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 15 05:15:10.939463 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jul 15 05:15:10.939579 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Jul 15 05:15:10.939789 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 15 05:15:10.939811 kernel: PCI: CLS 0 bytes, default 64 Jul 15 05:15:10.939828 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 15 05:15:10.939848 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093fa6a7c, max_idle_ns: 440795295209 ns Jul 15 05:15:10.939862 kernel: clocksource: Switched to clocksource tsc Jul 15 05:15:10.939876 kernel: Initialise system trusted keyrings Jul 15 05:15:10.939894 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 15 05:15:10.939910 kernel: Key type asymmetric registered Jul 15 05:15:10.939922 kernel: Asymmetric key parser 'x509' registered Jul 15 05:15:10.939937 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 15 05:15:10.939953 kernel: io scheduler mq-deadline registered Jul 15 05:15:10.939969 kernel: io scheduler kyber registered Jul 15 05:15:10.939987 kernel: io scheduler bfq registered Jul 15 05:15:10.940004 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 15 05:15:10.940021 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 15 05:15:10.940037 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 15 05:15:10.940054 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 15 05:15:10.940070 kernel: i8042: Warning: Keylock active Jul 15 05:15:10.940086 kernel: serio: i8042 KBD port at 0x60,0x64 
irq 1 Jul 15 05:15:10.940103 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 15 05:15:10.940254 kernel: rtc_cmos 00:00: RTC can wake from S4 Jul 15 05:15:10.940386 kernel: rtc_cmos 00:00: registered as rtc0 Jul 15 05:15:10.940519 kernel: rtc_cmos 00:00: setting system clock to 2025-07-15T05:15:10 UTC (1752556510) Jul 15 05:15:10.940654 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jul 15 05:15:10.940690 kernel: intel_pstate: CPU model not supported Jul 15 05:15:10.940736 kernel: efifb: probing for efifb Jul 15 05:15:10.940760 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k Jul 15 05:15:10.940780 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Jul 15 05:15:10.940797 kernel: efifb: scrolling: redraw Jul 15 05:15:10.940817 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 15 05:15:10.940833 kernel: Console: switching to colour frame buffer device 100x37 Jul 15 05:15:10.940849 kernel: fb0: EFI VGA frame buffer device Jul 15 05:15:10.940864 kernel: pstore: Using crash dump compression: deflate Jul 15 05:15:10.940881 kernel: pstore: Registered efi_pstore as persistent store backend Jul 15 05:15:10.940898 kernel: NET: Registered PF_INET6 protocol family Jul 15 05:15:10.940913 kernel: Segment Routing with IPv6 Jul 15 05:15:10.940930 kernel: In-situ OAM (IOAM) with IPv6 Jul 15 05:15:10.940946 kernel: NET: Registered PF_PACKET protocol family Jul 15 05:15:10.940965 kernel: Key type dns_resolver registered Jul 15 05:15:10.940982 kernel: IPI shorthand broadcast: enabled Jul 15 05:15:10.940999 kernel: sched_clock: Marking stable (2735077131, 148147819)->(2978416896, -95191946) Jul 15 05:15:10.941016 kernel: registered taskstats version 1 Jul 15 05:15:10.941032 kernel: Loading compiled-in X.509 certificates Jul 15 05:15:10.941049 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: a24478b628e55368911ce1800a2bd6bc158938c7' Jul 15 05:15:10.941065 kernel: Demotion targets for Node 0: null Jul 15 05:15:10.941082 kernel: Key type .fscrypt registered Jul 15 05:15:10.941098 kernel: Key type fscrypt-provisioning registered Jul 15 05:15:10.941117 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 15 05:15:10.941134 kernel: ima: Allocated hash algorithm: sha1 Jul 15 05:15:10.941151 kernel: ima: No architecture policies found Jul 15 05:15:10.941167 kernel: clk: Disabling unused clocks Jul 15 05:15:10.941184 kernel: Warning: unable to open an initial console. Jul 15 05:15:10.941201 kernel: Freeing unused kernel image (initmem) memory: 54608K Jul 15 05:15:10.941217 kernel: Write protecting the kernel read-only data: 24576k Jul 15 05:15:10.941235 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 15 05:15:10.941254 kernel: Run /init as init process Jul 15 05:15:10.941274 kernel: with arguments: Jul 15 05:15:10.941290 kernel: /init Jul 15 05:15:10.941306 kernel: with environment: Jul 15 05:15:10.941322 kernel: HOME=/ Jul 15 05:15:10.941338 kernel: TERM=linux Jul 15 05:15:10.941357 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 15 05:15:10.941375 systemd[1]: Successfully made /usr/ read-only. 
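The rtc_cmos entry above prints the same instant twice, as wall-clock time and as epoch seconds; the correspondence is easy to confirm:

    # Cross-check of the rtc_cmos line above: 1752556510 epoch seconds
    # should render as 2025-07-15T05:15:10 UTC.
    from datetime import datetime, timezone

    t = datetime.fromtimestamp(1752556510, tz=timezone.utc)
    print(t.isoformat())  # 2025-07-15T05:15:10+00:00
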
Jul 15 05:15:10.941398 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 15 05:15:10.941416 systemd[1]: Detected virtualization amazon. Jul 15 05:15:10.941436 systemd[1]: Detected architecture x86-64. Jul 15 05:15:10.941452 systemd[1]: Running in initrd. Jul 15 05:15:10.941469 systemd[1]: No hostname configured, using default hostname. Jul 15 05:15:10.941490 systemd[1]: Hostname set to . Jul 15 05:15:10.941508 systemd[1]: Initializing machine ID from VM UUID. Jul 15 05:15:10.941525 systemd[1]: Queued start job for default target initrd.target. Jul 15 05:15:10.941542 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 05:15:10.941559 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 05:15:10.941579 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 15 05:15:10.941596 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 15 05:15:10.941613 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 15 05:15:10.941645 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 15 05:15:10.941664 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 15 05:15:10.942586 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 15 05:15:10.942607 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 05:15:10.942626 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 15 05:15:10.942645 systemd[1]: Reached target paths.target - Path Units. Jul 15 05:15:10.942663 systemd[1]: Reached target slices.target - Slice Units. Jul 15 05:15:10.942702 systemd[1]: Reached target swap.target - Swaps. Jul 15 05:15:10.942720 systemd[1]: Reached target timers.target - Timer Units. Jul 15 05:15:10.942737 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 15 05:15:10.942755 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 15 05:15:10.942772 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 15 05:15:10.942790 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 15 05:15:10.942807 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 15 05:15:10.942825 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 15 05:15:10.942843 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 05:15:10.942862 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 05:15:10.942880 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 15 05:15:10.942897 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 15 05:15:10.942915 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
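The \x2d sequences in the device unit names above come from systemd's unit-name escaping: path separators turn into '-', and a literal '-' (like other characters outside the safe set) becomes its hex escape. The canonical tool is systemd-escape; the toy function below only reproduces the behaviour visible in this log, not the full algorithm:

    # Toy reproduction of the unit-name escaping visible above; the real rules
    # (systemd-escape / unit_name_from_path) cover more cases than this.
    def escape_device_path(path: str, suffix: str = ".device") -> str:
        out = []
        for ch in path.lstrip("/"):
            if ch == "/":
                out.append("-")                      # path separators become '-'
            elif ch.isalnum() or ch in "_.":
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))      # e.g. '-' -> \x2d
        return "".join(out) + suffix

    print(escape_device_path("/dev/disk/by-label/EFI-SYSTEM"))
    # -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, matching the unit above
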
Jul 15 05:15:10.942933 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 15 05:15:10.942950 systemd[1]: Starting systemd-fsck-usr.service... Jul 15 05:15:10.942968 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 15 05:15:10.942986 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 15 05:15:10.943007 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 05:15:10.943024 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 15 05:15:10.943073 systemd-journald[207]: Collecting audit messages is disabled. Jul 15 05:15:10.943116 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 05:15:10.943134 systemd[1]: Finished systemd-fsck-usr.service. Jul 15 05:15:10.943153 systemd-journald[207]: Journal started Jul 15 05:15:10.943192 systemd-journald[207]: Runtime Journal (/run/log/journal/ec2fadff0ed727aee5cfad14770b9cfb) is 4.8M, max 38.4M, 33.6M free. Jul 15 05:15:10.948744 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 15 05:15:10.963073 systemd[1]: Started systemd-journald.service - Journal Service. Jul 15 05:15:10.967069 systemd-modules-load[208]: Inserted module 'overlay' Jul 15 05:15:10.969435 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 15 05:15:10.975735 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 05:15:10.984658 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 15 05:15:10.990090 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 15 05:15:10.996893 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 15 05:15:11.008059 systemd-tmpfiles[222]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 15 05:15:11.009499 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 15 05:15:11.015876 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 05:15:11.025792 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 05:15:11.035059 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 15 05:15:11.035092 kernel: Bridge firewalling registered Jul 15 05:15:11.031262 systemd-modules-load[208]: Inserted module 'br_netfilter' Jul 15 05:15:11.035163 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 15 05:15:11.039309 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 05:15:11.043394 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 15 05:15:11.050834 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 15 05:15:11.057363 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 05:15:11.061896 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jul 15 05:15:11.074615 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=926b029026d98240a9e8b6527b65fc026ae523bea87c3b77ffd7237bcc7be4fb Jul 15 05:15:11.121931 systemd-resolved[247]: Positive Trust Anchors: Jul 15 05:15:11.121948 systemd-resolved[247]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 05:15:11.122009 systemd-resolved[247]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 05:15:11.131898 systemd-resolved[247]: Defaulting to hostname 'linux'. Jul 15 05:15:11.133281 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 15 05:15:11.134751 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 15 05:15:11.174713 kernel: SCSI subsystem initialized Jul 15 05:15:11.183712 kernel: Loading iSCSI transport class v2.0-870. Jul 15 05:15:11.194703 kernel: iscsi: registered transport (tcp) Jul 15 05:15:11.216985 kernel: iscsi: registered transport (qla4xxx) Jul 15 05:15:11.217073 kernel: QLogic iSCSI HBA Driver Jul 15 05:15:11.235464 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 15 05:15:11.251025 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 05:15:11.251920 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 15 05:15:11.299279 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 15 05:15:11.301388 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 15 05:15:11.358727 kernel: raid6: avx512x4 gen() 18083 MB/s Jul 15 05:15:11.376701 kernel: raid6: avx512x2 gen() 18128 MB/s Jul 15 05:15:11.394723 kernel: raid6: avx512x1 gen() 18039 MB/s Jul 15 05:15:11.412703 kernel: raid6: avx2x4 gen() 17600 MB/s Jul 15 05:15:11.430704 kernel: raid6: avx2x2 gen() 15013 MB/s Jul 15 05:15:11.449281 kernel: raid6: avx2x1 gen() 13491 MB/s Jul 15 05:15:11.449362 kernel: raid6: using algorithm avx512x2 gen() 18128 MB/s Jul 15 05:15:11.467947 kernel: raid6: .... xor() 24309 MB/s, rmw enabled Jul 15 05:15:11.468022 kernel: raid6: using avx512x2 recovery algorithm Jul 15 05:15:11.489713 kernel: xor: automatically using best checksumming function avx Jul 15 05:15:11.657712 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 15 05:15:11.664220 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 15 05:15:11.666330 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 05:15:11.696913 systemd-udevd[456]: Using default interface naming scheme 'v255'. 
Jul 15 05:15:11.703576 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 05:15:11.707420 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 15 05:15:11.730084 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation Jul 15 05:15:11.756213 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 15 05:15:11.758187 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 15 05:15:11.811780 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 05:15:11.817464 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 15 05:15:11.898703 kernel: cryptd: max_cpu_qlen set to 1000 Jul 15 05:15:11.917915 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jul 15 05:15:11.918190 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jul 15 05:15:11.933699 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Jul 15 05:15:11.940844 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Jul 15 05:15:11.948100 kernel: AES CTR mode by8 optimization enabled Jul 15 05:15:11.948163 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:93:e7:d6:a0:d5 Jul 15 05:15:11.955389 kernel: nvme nvme0: pci function 0000:00:04.0 Jul 15 05:15:11.955634 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jul 15 05:15:11.973422 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jul 15 05:15:11.981110 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 05:15:11.981353 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 05:15:11.983239 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 05:15:11.984952 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 05:15:12.000096 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 15 05:15:12.000138 kernel: GPT:9289727 != 16777215 Jul 15 05:15:12.000157 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 15 05:15:12.000175 kernel: GPT:9289727 != 16777215 Jul 15 05:15:12.000193 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 15 05:15:12.000211 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 15 05:15:11.989432 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 15 05:15:12.003372 (udev-worker)[516]: Network interface NamePolicy= disabled on kernel command line. Jul 15 05:15:12.027653 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 05:15:12.042697 kernel: nvme nvme0: using unchecked data buffer Jul 15 05:15:12.148695 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jul 15 05:15:12.157913 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jul 15 05:15:12.158488 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jul 15 05:15:12.159768 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 15 05:15:12.178962 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 15 05:15:12.189448 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. 
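The GPT warnings above ("9289727 != 16777215") mean the backup GPT header sits where a smaller disk image placed it, while the attached volume is bigger; disk-uuid.service rewrites both headers shortly after this in the log. Assuming 512-byte logical sectors on nvme0n1, the two LBAs translate to sizes as follows:

    # Rough size arithmetic for the "GPT:9289727 != 16777215" warning above,
    # assuming 512-byte logical sectors on the nvme0n1 volume.
    SECTOR = 512
    alt_header_lba = 9289727     # where the image's backup GPT header is
    last_lba = 16777215          # actual last LBA of the attached volume

    print((alt_header_lba + 1) * SECTOR / 2**30)  # ~4.43 GiB: size the image was built for
    print((last_lba + 1) * SECTOR / 2**30)        # 8.0 GiB: size of the attached volume
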
Jul 15 05:15:12.190092 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 15 05:15:12.191103 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 05:15:12.192184 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 05:15:12.194245 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 15 05:15:12.198838 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 15 05:15:12.217481 disk-uuid[696]: Primary Header is updated. Jul 15 05:15:12.217481 disk-uuid[696]: Secondary Entries is updated. Jul 15 05:15:12.217481 disk-uuid[696]: Secondary Header is updated. Jul 15 05:15:12.223704 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 15 05:15:12.227165 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 15 05:15:13.237960 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 15 05:15:13.239595 disk-uuid[699]: The operation has completed successfully. Jul 15 05:15:13.371338 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 15 05:15:13.371478 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 15 05:15:13.409329 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 15 05:15:13.423446 sh[964]: Success Jul 15 05:15:13.444418 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 15 05:15:13.444720 kernel: device-mapper: uevent: version 1.0.3 Jul 15 05:15:13.444756 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 15 05:15:13.457751 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Jul 15 05:15:13.547084 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 15 05:15:13.549825 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 15 05:15:13.559312 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 15 05:15:13.583698 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 15 05:15:13.586723 kernel: BTRFS: device fsid eb96c768-dac4-4ca9-ae1d-82815d4ce00b devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (987) Jul 15 05:15:13.589849 kernel: BTRFS info (device dm-0): first mount of filesystem eb96c768-dac4-4ca9-ae1d-82815d4ce00b Jul 15 05:15:13.589908 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 15 05:15:13.592181 kernel: BTRFS info (device dm-0): using free-space-tree Jul 15 05:15:13.686883 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 15 05:15:13.687847 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 15 05:15:13.688384 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 15 05:15:13.689143 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 15 05:15:13.690890 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
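verity-setup.service above is what turns the verity.usr and verity.usrhash parameters from the kernel command line into /dev/mapper/usr. Conceptually, dm-verity hashes the /usr image in fixed-size blocks into a hash tree and rejects any read that does not reproduce the expected root hash. The sketch below shows only a flattened, single-level version of that idea; the real on-disk format produced by veritysetup is a multi-level tree with a salt and superblock, so its hashes will not match this toy:

    # Greatly simplified illustration of the dm-verity idea behind /dev/mapper/usr:
    # hash fixed-size blocks, hash the block hashes, compare against an expected
    # root hash. The real veritysetup format is a multi-level tree with salt.
    import hashlib

    BLOCK = 4096

    def toy_root_hash(image_path: str) -> str:
        block_hashes = []
        with open(image_path, "rb") as f:
            while True:
                block = f.read(BLOCK)
                if not block:
                    break
                block = block.ljust(BLOCK, b"\0")      # pad the final block
                block_hashes.append(hashlib.sha256(block).digest())
        return hashlib.sha256(b"".join(block_hashes)).hexdigest()

    # Shape of the check only; this toy hash will not equal the value that
    # veritysetup computes for the image behind verity.usrhash.
    # expected = "926b029026d98240a9e8b6527b65fc026ae523bea87c3b77ffd7237bcc7be4fb"
    # print(toy_root_hash("/path/to/usr.img") == expected)
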
Jul 15 05:15:13.726698 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:9) scanned by mount (1021) Jul 15 05:15:13.731205 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f Jul 15 05:15:13.731276 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 15 05:15:13.734767 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 15 05:15:13.755733 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f Jul 15 05:15:13.756947 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 15 05:15:13.760582 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 15 05:15:13.789489 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 15 05:15:13.792084 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 15 05:15:13.828840 systemd-networkd[1156]: lo: Link UP Jul 15 05:15:13.828851 systemd-networkd[1156]: lo: Gained carrier Jul 15 05:15:13.830250 systemd-networkd[1156]: Enumeration completed Jul 15 05:15:13.830360 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 05:15:13.830860 systemd[1]: Reached target network.target - Network. Jul 15 05:15:13.830884 systemd-networkd[1156]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 05:15:13.830888 systemd-networkd[1156]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 05:15:13.834048 systemd-networkd[1156]: eth0: Link UP Jul 15 05:15:13.834056 systemd-networkd[1156]: eth0: Gained carrier Jul 15 05:15:13.834069 systemd-networkd[1156]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 05:15:13.848775 systemd-networkd[1156]: eth0: DHCPv4 address 172.31.23.74/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 15 05:15:14.262860 ignition[1121]: Ignition 2.21.0 Jul 15 05:15:14.262875 ignition[1121]: Stage: fetch-offline Jul 15 05:15:14.263279 ignition[1121]: no configs at "/usr/lib/ignition/base.d" Jul 15 05:15:14.263288 ignition[1121]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 15 05:15:14.265806 ignition[1121]: Ignition finished successfully Jul 15 05:15:14.267181 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 15 05:15:14.269207 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
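A small sanity check on the DHCPv4 lease logged above: the /20 prefix on 172.31.23.74 spans 172.31.16.0 through 172.31.31.255, which is why the gateway 172.31.16.1 is on-link:

    # Sanity check of the DHCPv4 lease above: 172.31.23.74/20 and its gateway.
    import ipaddress

    iface = ipaddress.ip_interface("172.31.23.74/20")
    gateway = ipaddress.ip_address("172.31.16.1")

    print(iface.network)             # 172.31.16.0/20
    print(gateway in iface.network)  # True: the gateway is on-link
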
Jul 15 05:15:14.293847 ignition[1165]: Ignition 2.21.0 Jul 15 05:15:14.293862 ignition[1165]: Stage: fetch Jul 15 05:15:14.294168 ignition[1165]: no configs at "/usr/lib/ignition/base.d" Jul 15 05:15:14.294177 ignition[1165]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 15 05:15:14.294265 ignition[1165]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 15 05:15:14.334021 ignition[1165]: PUT result: OK Jul 15 05:15:14.339883 ignition[1165]: parsed url from cmdline: "" Jul 15 05:15:14.339894 ignition[1165]: no config URL provided Jul 15 05:15:14.339908 ignition[1165]: reading system config file "/usr/lib/ignition/user.ign" Jul 15 05:15:14.339922 ignition[1165]: no config at "/usr/lib/ignition/user.ign" Jul 15 05:15:14.339949 ignition[1165]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 15 05:15:14.341219 ignition[1165]: PUT result: OK Jul 15 05:15:14.341273 ignition[1165]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jul 15 05:15:14.343621 ignition[1165]: GET result: OK Jul 15 05:15:14.343769 ignition[1165]: parsing config with SHA512: 577e5862ec007501f8bebf38234cedd374751c4419d49cb888443ace5bcfb355910f43dc1e5bf8ca8ad003f3291ab649a73e7eddd1d35d19692e1ad3f1e700f8 Jul 15 05:15:14.349879 unknown[1165]: fetched base config from "system" Jul 15 05:15:14.349889 unknown[1165]: fetched base config from "system" Jul 15 05:15:14.350292 ignition[1165]: fetch: fetch complete Jul 15 05:15:14.349894 unknown[1165]: fetched user config from "aws" Jul 15 05:15:14.350299 ignition[1165]: fetch: fetch passed Jul 15 05:15:14.350349 ignition[1165]: Ignition finished successfully Jul 15 05:15:14.352405 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 15 05:15:14.354230 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 15 05:15:14.384760 ignition[1172]: Ignition 2.21.0 Jul 15 05:15:14.384775 ignition[1172]: Stage: kargs Jul 15 05:15:14.387153 ignition[1172]: no configs at "/usr/lib/ignition/base.d" Jul 15 05:15:14.387172 ignition[1172]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 15 05:15:14.387343 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 15 05:15:14.389316 ignition[1172]: PUT result: OK Jul 15 05:15:14.392084 ignition[1172]: kargs: kargs passed Jul 15 05:15:14.392158 ignition[1172]: Ignition finished successfully Jul 15 05:15:14.393902 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 15 05:15:14.395780 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 15 05:15:14.428761 ignition[1179]: Ignition 2.21.0 Jul 15 05:15:14.428777 ignition[1179]: Stage: disks Jul 15 05:15:14.429158 ignition[1179]: no configs at "/usr/lib/ignition/base.d" Jul 15 05:15:14.429172 ignition[1179]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 15 05:15:14.429283 ignition[1179]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 15 05:15:14.431489 ignition[1179]: PUT result: OK Jul 15 05:15:14.436403 ignition[1179]: disks: disks passed Jul 15 05:15:14.436466 ignition[1179]: Ignition finished successfully Jul 15 05:15:14.437756 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 15 05:15:14.438558 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 15 05:15:14.439231 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 15 05:15:14.439561 systemd[1]: Reached target local-fs.target - Local File Systems. 
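The fetch-stage entries above show Ignition using IMDSv2: a PUT to the token endpoint, then a GET of user-data with that token, after which the SHA512 of the fetched config is logged. A hand-rolled version of the same two requests (standard IMDSv2 header names, run on the instance itself; this is not Ignition's actual code) looks roughly like this:

    # Hand-rolled version of the IMDSv2 flow Ignition logs above: PUT a session
    # token, GET user-data with it, and print the SHA512 of what came back.
    # Must run on the EC2 instance itself; not Ignition's implementation.
    import hashlib
    import urllib.request

    IMDS = "http://169.254.169.254"

    token_req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    with urllib.request.urlopen(token_req, timeout=5) as resp:
        token = resp.read().decode()

    data_req = urllib.request.Request(
        f"{IMDS}/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(data_req, timeout=5) as resp:
        user_data = resp.read()

    print(hashlib.sha512(user_data).hexdigest())
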
Jul 15 05:15:14.440107 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 05:15:14.440667 systemd[1]: Reached target basic.target - Basic System. Jul 15 05:15:14.442333 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 15 05:15:14.505357 systemd-fsck[1187]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 15 05:15:14.508398 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 15 05:15:14.509909 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 15 05:15:14.659698 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 277c3938-5262-4ab1-8fa3-62fde82f8257 r/w with ordered data mode. Quota mode: none. Jul 15 05:15:14.660532 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 15 05:15:14.661450 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 15 05:15:14.663419 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 15 05:15:14.666764 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 15 05:15:14.667924 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 15 05:15:14.669092 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 15 05:15:14.669121 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 15 05:15:14.675048 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 15 05:15:14.677202 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 15 05:15:14.692700 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:9) scanned by mount (1206) Jul 15 05:15:14.695710 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f Jul 15 05:15:14.697770 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 15 05:15:14.697826 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 15 05:15:14.704899 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 15 05:15:15.154047 initrd-setup-root[1230]: cut: /sysroot/etc/passwd: No such file or directory Jul 15 05:15:15.168850 initrd-setup-root[1237]: cut: /sysroot/etc/group: No such file or directory Jul 15 05:15:15.172890 initrd-setup-root[1244]: cut: /sysroot/etc/shadow: No such file or directory Jul 15 05:15:15.177428 initrd-setup-root[1251]: cut: /sysroot/etc/gshadow: No such file or directory Jul 15 05:15:15.469157 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 15 05:15:15.471587 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 15 05:15:15.474855 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 15 05:15:15.491098 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 15 05:15:15.493697 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f Jul 15 05:15:15.523763 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jul 15 05:15:15.525428 ignition[1318]: INFO : Ignition 2.21.0 Jul 15 05:15:15.525428 ignition[1318]: INFO : Stage: mount Jul 15 05:15:15.526859 ignition[1318]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 05:15:15.526859 ignition[1318]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 15 05:15:15.526859 ignition[1318]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 15 05:15:15.526859 ignition[1318]: INFO : PUT result: OK Jul 15 05:15:15.531890 ignition[1318]: INFO : mount: mount passed Jul 15 05:15:15.532517 ignition[1318]: INFO : Ignition finished successfully Jul 15 05:15:15.534071 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 15 05:15:15.535585 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 15 05:15:15.662481 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 15 05:15:15.694707 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:9) scanned by mount (1330) Jul 15 05:15:15.697856 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f Jul 15 05:15:15.697935 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 15 05:15:15.700320 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 15 05:15:15.706946 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 15 05:15:15.738824 ignition[1346]: INFO : Ignition 2.21.0 Jul 15 05:15:15.738824 ignition[1346]: INFO : Stage: files Jul 15 05:15:15.739951 ignition[1346]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 05:15:15.739951 ignition[1346]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 15 05:15:15.739951 ignition[1346]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 15 05:15:15.740918 ignition[1346]: INFO : PUT result: OK Jul 15 05:15:15.744106 ignition[1346]: DEBUG : files: compiled without relabeling support, skipping Jul 15 05:15:15.744951 ignition[1346]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 15 05:15:15.744951 ignition[1346]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 15 05:15:15.749239 ignition[1346]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 15 05:15:15.750010 ignition[1346]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 15 05:15:15.750010 ignition[1346]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 15 05:15:15.749788 unknown[1346]: wrote ssh authorized keys file for user: core Jul 15 05:15:15.752227 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 15 05:15:15.752872 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 15 05:15:15.841084 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 15 05:15:15.853892 systemd-networkd[1156]: eth0: Gained IPv6LL Jul 15 05:15:16.056562 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 15 05:15:16.056562 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 15 05:15:16.058476 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: 
op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 15 05:15:16.058476 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 15 05:15:16.058476 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 15 05:15:16.058476 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 05:15:16.058476 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 05:15:16.058476 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 05:15:16.058476 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 05:15:16.063804 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 05:15:16.064640 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 05:15:16.064640 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 15 05:15:16.066645 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 15 05:15:16.066645 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 15 05:15:16.066645 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jul 15 05:15:16.702969 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 15 05:15:19.294949 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 15 05:15:19.294949 ignition[1346]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 15 05:15:19.296964 ignition[1346]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 05:15:19.301655 ignition[1346]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 05:15:19.301655 ignition[1346]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 15 05:15:19.303344 ignition[1346]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 15 05:15:19.303344 ignition[1346]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 15 05:15:19.303344 ignition[1346]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 15 05:15:19.303344 ignition[1346]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 15 05:15:19.303344 
ignition[1346]: INFO : files: files passed Jul 15 05:15:19.303344 ignition[1346]: INFO : Ignition finished successfully Jul 15 05:15:19.303389 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 15 05:15:19.306001 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 15 05:15:19.309378 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 15 05:15:19.320176 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 15 05:15:19.320887 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 15 05:15:19.338209 initrd-setup-root-after-ignition[1377]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 05:15:19.338209 initrd-setup-root-after-ignition[1377]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 15 05:15:19.341487 initrd-setup-root-after-ignition[1381]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 05:15:19.341815 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 15 05:15:19.343415 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 15 05:15:19.345326 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 15 05:15:19.399037 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 15 05:15:19.399144 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 15 05:15:19.400208 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 15 05:15:19.401381 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 15 05:15:19.402283 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 15 05:15:19.403139 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 15 05:15:19.422541 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 15 05:15:19.424686 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 15 05:15:19.446854 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 15 05:15:19.447629 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 05:15:19.448713 systemd[1]: Stopped target timers.target - Timer Units. Jul 15 05:15:19.449545 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 15 05:15:19.449974 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 15 05:15:19.451043 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 15 05:15:19.451987 systemd[1]: Stopped target basic.target - Basic System. Jul 15 05:15:19.452811 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 15 05:15:19.453700 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 15 05:15:19.454472 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 15 05:15:19.455309 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 15 05:15:19.456100 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 15 05:15:19.456876 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 15 05:15:19.457806 systemd[1]: Stopped target sysinit.target - System Initialization. 
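The files stage that finished above lays out a systemd-sysext image in the conventional locations: the raw image is written under /opt/extensions and a symlink in /etc/extensions gives it the stable name that systemd-sysext looks for at boot. The sketch below only reproduces that directory layout with the version string taken from the log; the actual download from extensions.flatcar.org and the Ignition config semantics are omitted.

    import os

    # Mirror the layout created by the files stage: the image lives under /opt,
    # and /etc/extensions holds a stable-named symlink that systemd-sysext picks up.
    version = "v1.31.8"
    image = f"/opt/extensions/kubernetes/kubernetes-{version}-x86-64.raw"
    link = "/etc/extensions/kubernetes.raw"

    os.makedirs(os.path.dirname(image), exist_ok=True)
    os.makedirs(os.path.dirname(link), exist_ok=True)

    # (Fetching the image itself is not shown; Ignition downloads it from
    # extensions.flatcar.org as recorded in the log above.)
    if not os.path.islink(link):
        os.symlink(image, link)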
Jul 15 05:15:19.458870 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 15 05:15:19.459639 systemd[1]: Stopped target swap.target - Swaps. Jul 15 05:15:19.460345 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 15 05:15:19.460574 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 15 05:15:19.461789 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 15 05:15:19.462566 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 05:15:19.463241 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 15 05:15:19.463400 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 05:15:19.464066 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 15 05:15:19.464283 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 15 05:15:19.465265 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 15 05:15:19.465496 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 15 05:15:19.466264 systemd[1]: ignition-files.service: Deactivated successfully. Jul 15 05:15:19.466457 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 15 05:15:19.468768 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 15 05:15:19.473955 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 15 05:15:19.475153 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 15 05:15:19.475365 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 05:15:19.477564 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 15 05:15:19.478349 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 15 05:15:19.486720 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 15 05:15:19.486844 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 15 05:15:19.507702 ignition[1401]: INFO : Ignition 2.21.0 Jul 15 05:15:19.507702 ignition[1401]: INFO : Stage: umount Jul 15 05:15:19.507702 ignition[1401]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 05:15:19.507702 ignition[1401]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 15 05:15:19.511475 ignition[1401]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 15 05:15:19.511475 ignition[1401]: INFO : PUT result: OK Jul 15 05:15:19.508078 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 15 05:15:19.515549 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 15 05:15:19.515727 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 15 05:15:19.517948 ignition[1401]: INFO : umount: umount passed Jul 15 05:15:19.517948 ignition[1401]: INFO : Ignition finished successfully Jul 15 05:15:19.519103 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 15 05:15:19.519254 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 15 05:15:19.520217 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 15 05:15:19.520280 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 15 05:15:19.520720 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 15 05:15:19.520786 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 15 05:15:19.521331 systemd[1]: ignition-fetch.service: Deactivated successfully. 
Jul 15 05:15:19.521384 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 15 05:15:19.522149 systemd[1]: Stopped target network.target - Network. Jul 15 05:15:19.522731 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 15 05:15:19.522797 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 15 05:15:19.523409 systemd[1]: Stopped target paths.target - Path Units. Jul 15 05:15:19.523999 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 15 05:15:19.526835 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 05:15:19.527297 systemd[1]: Stopped target slices.target - Slice Units. Jul 15 05:15:19.528207 systemd[1]: Stopped target sockets.target - Socket Units. Jul 15 05:15:19.528854 systemd[1]: iscsid.socket: Deactivated successfully. Jul 15 05:15:19.528929 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 15 05:15:19.529473 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 15 05:15:19.529525 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 15 05:15:19.530262 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 15 05:15:19.530347 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 15 05:15:19.530943 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 15 05:15:19.531003 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 15 05:15:19.531588 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 15 05:15:19.531650 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 15 05:15:19.532416 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 15 05:15:19.533056 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 15 05:15:19.537469 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 15 05:15:19.537781 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 15 05:15:19.541191 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 15 05:15:19.542043 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 15 05:15:19.542154 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 05:15:19.545020 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 15 05:15:19.545361 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 15 05:15:19.545494 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 15 05:15:19.547643 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 15 05:15:19.548416 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 15 05:15:19.549306 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 15 05:15:19.549369 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 15 05:15:19.551238 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 15 05:15:19.551799 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 15 05:15:19.551869 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 15 05:15:19.552493 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 15 05:15:19.552552 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jul 15 05:15:19.556664 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 15 05:15:19.556766 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 15 05:15:19.557457 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 05:15:19.562098 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 15 05:15:19.576284 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 15 05:15:19.576736 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 05:15:19.578027 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 15 05:15:19.578092 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 15 05:15:19.579892 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 15 05:15:19.579929 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 05:15:19.580629 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 15 05:15:19.580707 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 15 05:15:19.581876 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 15 05:15:19.581925 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 15 05:15:19.582898 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 15 05:15:19.582955 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 15 05:15:19.585158 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 15 05:15:19.586038 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 15 05:15:19.586408 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 05:15:19.587577 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 15 05:15:19.587630 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 05:15:19.589030 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 15 05:15:19.589076 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 15 05:15:19.589534 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 15 05:15:19.589658 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 05:15:19.590282 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 05:15:19.590324 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 05:15:19.591870 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 15 05:15:19.593821 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 15 05:15:19.600915 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 15 05:15:19.601016 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 15 05:15:19.602267 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 15 05:15:19.603781 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 15 05:15:19.639816 systemd[1]: Switching root. Jul 15 05:15:19.685207 systemd-journald[207]: Journal stopped Jul 15 05:15:21.628021 systemd-journald[207]: Received SIGTERM from PID 1 (systemd). 
Jul 15 05:15:21.628120 kernel: SELinux: policy capability network_peer_controls=1 Jul 15 05:15:21.628142 kernel: SELinux: policy capability open_perms=1 Jul 15 05:15:21.628166 kernel: SELinux: policy capability extended_socket_class=1 Jul 15 05:15:21.628184 kernel: SELinux: policy capability always_check_network=0 Jul 15 05:15:21.628203 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 15 05:15:21.628222 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 15 05:15:21.628239 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 15 05:15:21.628257 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 15 05:15:21.628277 kernel: SELinux: policy capability userspace_initial_context=0 Jul 15 05:15:21.628296 kernel: audit: type=1403 audit(1752556520.119:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 15 05:15:21.628320 systemd[1]: Successfully loaded SELinux policy in 90.276ms. Jul 15 05:15:21.628357 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.366ms. Jul 15 05:15:21.628384 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 15 05:15:21.628404 systemd[1]: Detected virtualization amazon. Jul 15 05:15:21.628424 systemd[1]: Detected architecture x86-64. Jul 15 05:15:21.628444 systemd[1]: Detected first boot. Jul 15 05:15:21.628463 systemd[1]: Initializing machine ID from VM UUID. Jul 15 05:15:21.628483 zram_generator::config[1444]: No configuration found. Jul 15 05:15:21.628504 kernel: Guest personality initialized and is inactive Jul 15 05:15:21.628524 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 15 05:15:21.628543 kernel: Initialized host personality Jul 15 05:15:21.628561 kernel: NET: Registered PF_VSOCK protocol family Jul 15 05:15:21.628580 systemd[1]: Populated /etc with preset unit settings. Jul 15 05:15:21.628602 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 15 05:15:21.628622 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 15 05:15:21.628641 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 15 05:15:21.628661 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 15 05:15:21.630755 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 15 05:15:21.630792 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 15 05:15:21.630812 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 15 05:15:21.630832 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 15 05:15:21.630852 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 15 05:15:21.630872 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 15 05:15:21.630891 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 15 05:15:21.630911 systemd[1]: Created slice user.slice - User and Session Slice. Jul 15 05:15:21.630930 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jul 15 05:15:21.630951 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 05:15:21.630973 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 15 05:15:21.630999 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 15 05:15:21.631020 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 15 05:15:21.631040 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 15 05:15:21.631060 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 15 05:15:21.631082 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 05:15:21.631101 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 15 05:15:21.631124 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 15 05:15:21.631146 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 15 05:15:21.631165 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 15 05:15:21.631185 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 15 05:15:21.631208 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 05:15:21.631229 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 05:15:21.631249 systemd[1]: Reached target slices.target - Slice Units. Jul 15 05:15:21.631268 systemd[1]: Reached target swap.target - Swaps. Jul 15 05:15:21.631288 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 15 05:15:21.631314 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 15 05:15:21.631336 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 15 05:15:21.631358 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 15 05:15:21.631380 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 15 05:15:21.631402 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 05:15:21.631422 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 15 05:15:21.631443 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 15 05:15:21.631463 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 15 05:15:21.631484 systemd[1]: Mounting media.mount - External Media Directory... Jul 15 05:15:21.631508 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:15:21.631528 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 15 05:15:21.631549 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 15 05:15:21.631570 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 15 05:15:21.631591 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 15 05:15:21.631616 systemd[1]: Reached target machines.target - Containers. Jul 15 05:15:21.631636 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Jul 15 05:15:21.631658 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 05:15:21.633844 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 15 05:15:21.633888 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 15 05:15:21.633910 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 05:15:21.633930 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 05:15:21.633951 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 05:15:21.633973 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 15 05:15:21.633994 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 05:15:21.634017 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 15 05:15:21.634039 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 15 05:15:21.634063 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 15 05:15:21.634085 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 15 05:15:21.634108 systemd[1]: Stopped systemd-fsck-usr.service. Jul 15 05:15:21.634131 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 05:15:21.634152 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 15 05:15:21.634172 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 15 05:15:21.634190 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 15 05:15:21.634212 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 15 05:15:21.637632 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 15 05:15:21.641743 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 15 05:15:21.641782 systemd[1]: verity-setup.service: Deactivated successfully. Jul 15 05:15:21.641804 systemd[1]: Stopped verity-setup.service. Jul 15 05:15:21.641826 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:15:21.641856 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 15 05:15:21.641880 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 15 05:15:21.641902 systemd[1]: Mounted media.mount - External Media Directory. Jul 15 05:15:21.641923 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 15 05:15:21.641944 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 15 05:15:21.641964 kernel: loop: module loaded Jul 15 05:15:21.641989 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 15 05:15:21.642009 kernel: fuse: init (API version 7.41) Jul 15 05:15:21.642031 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 05:15:21.642057 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Jul 15 05:15:21.642080 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 15 05:15:21.642103 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 05:15:21.642127 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 05:15:21.642148 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 05:15:21.642171 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 05:15:21.642196 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 15 05:15:21.642218 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 15 05:15:21.642241 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 05:15:21.642261 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 05:15:21.642283 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 15 05:15:21.642305 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 05:15:21.642326 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 15 05:15:21.642349 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 15 05:15:21.642370 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 15 05:15:21.642446 systemd-journald[1523]: Collecting audit messages is disabled. Jul 15 05:15:21.642492 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 15 05:15:21.642515 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 15 05:15:21.642540 kernel: ACPI: bus type drm_connector registered Jul 15 05:15:21.642561 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 15 05:15:21.642584 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 15 05:15:21.642607 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 15 05:15:21.642629 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 05:15:21.642651 systemd-journald[1523]: Journal started Jul 15 05:15:21.644004 systemd-journald[1523]: Runtime Journal (/run/log/journal/ec2fadff0ed727aee5cfad14770b9cfb) is 4.8M, max 38.4M, 33.6M free. Jul 15 05:15:21.227098 systemd[1]: Queued start job for default target multi-user.target. Jul 15 05:15:21.236216 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jul 15 05:15:21.236739 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 15 05:15:21.659075 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 15 05:15:21.659145 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 05:15:21.673627 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 15 05:15:21.673748 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 15 05:15:21.684712 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 05:15:21.689696 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Jul 15 05:15:21.696703 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 15 05:15:21.704731 systemd[1]: Started systemd-journald.service - Journal Service. Jul 15 05:15:21.709763 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 15 05:15:21.712313 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 05:15:21.712581 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 05:15:21.717071 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 15 05:15:21.718901 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 15 05:15:21.720596 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 15 05:15:21.750856 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 15 05:15:21.756774 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 15 05:15:21.759637 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 15 05:15:21.771932 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 15 05:15:21.782889 systemd-journald[1523]: Time spent on flushing to /var/log/journal/ec2fadff0ed727aee5cfad14770b9cfb is 51.205ms for 1013 entries. Jul 15 05:15:21.782889 systemd-journald[1523]: System Journal (/var/log/journal/ec2fadff0ed727aee5cfad14770b9cfb) is 8M, max 195.6M, 187.6M free. Jul 15 05:15:21.851016 systemd-journald[1523]: Received client request to flush runtime journal. Jul 15 05:15:21.851089 kernel: loop0: detected capacity change from 0 to 146488 Jul 15 05:15:21.820764 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 05:15:21.832086 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 05:15:21.838099 systemd-tmpfiles[1560]: ACLs are not supported, ignoring. Jul 15 05:15:21.838124 systemd-tmpfiles[1560]: ACLs are not supported, ignoring. Jul 15 05:15:21.849889 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 15 05:15:21.852884 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 15 05:15:21.855064 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 15 05:15:21.861884 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 15 05:15:21.940056 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 15 05:15:21.944851 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 15 05:15:21.951887 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 15 05:15:21.988187 systemd-tmpfiles[1598]: ACLs are not supported, ignoring. Jul 15 05:15:21.988216 systemd-tmpfiles[1598]: ACLs are not supported, ignoring. Jul 15 05:15:21.994153 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 05:15:22.000339 kernel: loop1: detected capacity change from 0 to 114000 Jul 15 05:15:22.144731 kernel: loop2: detected capacity change from 0 to 72384 Jul 15 05:15:22.238435 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Jul 15 05:15:22.257718 kernel: loop3: detected capacity change from 0 to 221472 Jul 15 05:15:22.543698 kernel: loop4: detected capacity change from 0 to 146488 Jul 15 05:15:22.581974 kernel: loop5: detected capacity change from 0 to 114000 Jul 15 05:15:22.616744 kernel: loop6: detected capacity change from 0 to 72384 Jul 15 05:15:22.632621 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 15 05:15:22.637246 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 05:15:22.641735 kernel: loop7: detected capacity change from 0 to 221472 Jul 15 05:15:22.668135 (sd-merge)[1604]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jul 15 05:15:22.670861 (sd-merge)[1604]: Merged extensions into '/usr'. Jul 15 05:15:22.674024 systemd-udevd[1606]: Using default interface naming scheme 'v255'. Jul 15 05:15:22.676465 systemd[1]: Reload requested from client PID 1559 ('systemd-sysext') (unit systemd-sysext.service)... Jul 15 05:15:22.676483 systemd[1]: Reloading... Jul 15 05:15:22.754707 zram_generator::config[1638]: No configuration found. Jul 15 05:15:22.932275 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 05:15:22.992807 (udev-worker)[1656]: Network interface NamePolicy= disabled on kernel command line. Jul 15 05:15:23.058916 kernel: mousedev: PS/2 mouse device common for all mice Jul 15 05:15:23.063716 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 15 05:15:23.067357 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 15 05:15:23.067547 systemd[1]: Reloading finished in 390 ms. Jul 15 05:15:23.079700 kernel: ACPI: button: Power Button [PWRF] Jul 15 05:15:23.079793 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jul 15 05:15:23.079813 kernel: ACPI: button: Sleep Button [SLPF] Jul 15 05:15:23.081147 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 05:15:23.087906 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jul 15 05:15:23.102625 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 15 05:15:23.113234 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 15 05:15:23.118204 systemd[1]: Starting ensure-sysext.service... Jul 15 05:15:23.121545 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 15 05:15:23.142898 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 15 05:15:23.156804 systemd[1]: Reload requested from client PID 1723 ('systemctl') (unit ensure-sysext.service)... Jul 15 05:15:23.156823 systemd[1]: Reloading... Jul 15 05:15:23.165895 systemd-tmpfiles[1725]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 15 05:15:23.165923 systemd-tmpfiles[1725]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 15 05:15:23.166215 systemd-tmpfiles[1725]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 15 05:15:23.166463 systemd-tmpfiles[1725]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
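The (sd-merge) entries above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes and oem-ami extension images onto /usr and /opt (the loop device messages are those images being attached). A small sketch of how the merged state could be inspected after boot, assuming the standard systemd-sysext CLI and the usual extension directories; this is an aid to reading the log, not part of the recorded boot flow.

    import pathlib
    import subprocess

    # Common locations where extension images are placed for systemd-sysext.
    for d in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
        p = pathlib.Path(d)
        if p.is_dir():
            for entry in sorted(p.iterdir()):
                print("candidate extension:", entry)

    # Read-only query of the images systemd-sysext currently knows about.
    subprocess.run(["systemd-sysext", "list"], check=False)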
Jul 15 05:15:23.173206 systemd-tmpfiles[1725]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 15 05:15:23.173475 systemd-tmpfiles[1725]: ACLs are not supported, ignoring. Jul 15 05:15:23.173538 systemd-tmpfiles[1725]: ACLs are not supported, ignoring. Jul 15 05:15:23.185201 systemd-tmpfiles[1725]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 05:15:23.185215 systemd-tmpfiles[1725]: Skipping /boot Jul 15 05:15:23.215408 systemd-tmpfiles[1725]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 05:15:23.215425 systemd-tmpfiles[1725]: Skipping /boot Jul 15 05:15:23.265704 zram_generator::config[1762]: No configuration found. Jul 15 05:15:23.453163 systemd-networkd[1721]: lo: Link UP Jul 15 05:15:23.455025 systemd-networkd[1721]: lo: Gained carrier Jul 15 05:15:23.456873 ldconfig[1548]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 15 05:15:23.457495 systemd-networkd[1721]: Enumeration completed Jul 15 05:15:23.463980 systemd-networkd[1721]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 05:15:23.463994 systemd-networkd[1721]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 05:15:23.468886 systemd-networkd[1721]: eth0: Link UP Jul 15 05:15:23.469054 systemd-networkd[1721]: eth0: Gained carrier Jul 15 05:15:23.469084 systemd-networkd[1721]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 05:15:23.478746 systemd-networkd[1721]: eth0: DHCPv4 address 172.31.23.74/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 15 05:15:23.482437 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 05:15:23.619030 systemd[1]: Reloading finished in 461 ms. Jul 15 05:15:23.640718 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 15 05:15:23.641317 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 05:15:23.642054 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 15 05:15:23.654293 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 05:15:23.679766 systemd[1]: Finished ensure-sysext.service. Jul 15 05:15:23.700290 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 15 05:15:23.704426 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:15:23.705864 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 05:15:23.710865 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 15 05:15:23.711785 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 05:15:23.714080 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 05:15:23.719439 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 05:15:23.722571 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 05:15:23.726988 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
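The networkd enumeration above ends with eth0 acquiring 172.31.23.74/20 from 172.31.16.1 over DHCPv4 via the zz-default.network match. One quick way to confirm the configured address from userspace is iproute2's JSON output; the sketch below assumes an iproute2 build with the -j flag, which recent Flatcar images ship.

    import json
    import subprocess

    # Query the current addresses on eth0 as JSON (iproute2's -j flag).
    out = subprocess.run(
        ["ip", "-j", "addr", "show", "dev", "eth0"],
        capture_output=True, text=True, check=True,
    ).stdout

    for iface in json.loads(out):
        for addr in iface.get("addr_info", []):
            # Each entry carries the address family, the address and the prefix length.
            print(addr["family"], f'{addr["local"]}/{addr["prefixlen"]}')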
Jul 15 05:15:23.729017 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 05:15:23.732118 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 15 05:15:23.732956 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 05:15:23.737496 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 15 05:15:23.750014 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 15 05:15:23.753399 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 15 05:15:23.757952 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 15 05:15:23.758608 systemd[1]: Reached target time-set.target - System Time Set. Jul 15 05:15:23.765981 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 15 05:15:23.773020 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 05:15:23.780950 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:15:23.782302 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 05:15:23.782615 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 05:15:23.784122 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 05:15:23.785804 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 05:15:23.787387 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 05:15:23.787605 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 05:15:23.789973 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 05:15:23.790199 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 05:15:23.798566 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 05:15:23.798912 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 15 05:15:23.831318 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 15 05:15:23.838950 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 15 05:15:23.859577 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 15 05:15:23.876721 augenrules[1948]: No rules Jul 15 05:15:23.877126 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 05:15:23.877481 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 05:15:23.883409 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 15 05:15:23.886665 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 15 05:15:23.906020 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Jul 15 05:15:23.908004 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 05:15:23.918713 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 15 05:15:23.944449 systemd-resolved[1921]: Positive Trust Anchors: Jul 15 05:15:23.944475 systemd-resolved[1921]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 05:15:23.944524 systemd-resolved[1921]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 05:15:23.945841 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 05:15:23.949734 systemd-resolved[1921]: Defaulting to hostname 'linux'. Jul 15 05:15:23.951899 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 15 05:15:23.952458 systemd[1]: Reached target network.target - Network. Jul 15 05:15:23.952926 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 15 05:15:23.953331 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 05:15:23.953967 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 15 05:15:23.954384 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 15 05:15:23.954791 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 15 05:15:23.955293 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 15 05:15:23.955791 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 15 05:15:23.956156 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 15 05:15:23.956510 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 15 05:15:23.956555 systemd[1]: Reached target paths.target - Path Units. Jul 15 05:15:23.956932 systemd[1]: Reached target timers.target - Timer Units. Jul 15 05:15:23.958971 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 15 05:15:23.960856 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 15 05:15:23.963605 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 15 05:15:23.964217 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 15 05:15:23.964630 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 15 05:15:23.967281 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 15 05:15:23.968062 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 15 05:15:23.969172 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Jul 15 05:15:23.970529 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 05:15:23.970948 systemd[1]: Reached target basic.target - Basic System. Jul 15 05:15:23.971378 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 15 05:15:23.971417 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 15 05:15:23.972487 systemd[1]: Starting containerd.service - containerd container runtime... Jul 15 05:15:23.975031 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 15 05:15:23.985208 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 15 05:15:23.989667 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 15 05:15:23.994175 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 15 05:15:23.999894 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 15 05:15:24.000846 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 15 05:15:24.024324 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 15 05:15:24.033957 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 15 05:15:24.038891 jq[1967]: false Jul 15 05:15:24.040869 systemd[1]: Started ntpd.service - Network Time Service. Jul 15 05:15:24.045870 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 15 05:15:24.050723 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 15 05:15:24.059902 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 15 05:15:24.066568 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 15 05:15:24.078787 google_oslogin_nss_cache[1969]: oslogin_cache_refresh[1969]: Refreshing passwd entry cache Jul 15 05:15:24.075480 oslogin_cache_refresh[1969]: Refreshing passwd entry cache Jul 15 05:15:24.083964 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 15 05:15:24.086864 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 15 05:15:24.087617 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 15 05:15:24.090922 systemd[1]: Starting update-engine.service - Update Engine... Jul 15 05:15:24.094727 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 15 05:15:24.107878 google_oslogin_nss_cache[1969]: oslogin_cache_refresh[1969]: Failure getting users, quitting Jul 15 05:15:24.107878 google_oslogin_nss_cache[1969]: oslogin_cache_refresh[1969]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 15 05:15:24.107878 google_oslogin_nss_cache[1969]: oslogin_cache_refresh[1969]: Refreshing group entry cache Jul 15 05:15:24.105732 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 15 05:15:24.103202 oslogin_cache_refresh[1969]: Failure getting users, quitting Jul 15 05:15:24.106809 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 15 05:15:24.103225 oslogin_cache_refresh[1969]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jul 15 05:15:24.107072 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 15 05:15:24.103280 oslogin_cache_refresh[1969]: Refreshing group entry cache Jul 15 05:15:24.116888 extend-filesystems[1968]: Found /dev/nvme0n1p6 Jul 15 05:15:24.117178 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 15 05:15:24.117450 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 15 05:15:24.129698 google_oslogin_nss_cache[1969]: oslogin_cache_refresh[1969]: Failure getting groups, quitting Jul 15 05:15:24.129698 google_oslogin_nss_cache[1969]: oslogin_cache_refresh[1969]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 15 05:15:24.128502 oslogin_cache_refresh[1969]: Failure getting groups, quitting Jul 15 05:15:24.128519 oslogin_cache_refresh[1969]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 15 05:15:24.130272 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 15 05:15:24.131082 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 15 05:15:24.136787 extend-filesystems[1968]: Found /dev/nvme0n1p9 Jul 15 05:15:24.143252 extend-filesystems[1968]: Checking size of /dev/nvme0n1p9 Jul 15 05:15:24.182464 systemd[1]: motdgen.service: Deactivated successfully. Jul 15 05:15:24.183748 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 15 05:15:24.192013 extend-filesystems[1968]: Resized partition /dev/nvme0n1p9 Jul 15 05:15:24.192630 jq[1985]: true Jul 15 05:15:24.195457 update_engine[1984]: I20250715 05:15:24.195308 1984 main.cc:92] Flatcar Update Engine starting Jul 15 05:15:24.203634 extend-filesystems[2014]: resize2fs 1.47.2 (1-Jan-2025) Jul 15 05:15:24.208963 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 15 05:15:24.214724 tar[1991]: linux-amd64/helm Jul 15 05:15:24.234534 ntpd[1973]: ntpd 4.2.8p17@1.4004-o Tue Jul 15 03:00:16 UTC 2025 (1): Starting Jul 15 05:15:24.235269 ntpd[1973]: 15 Jul 05:15:24 ntpd[1973]: ntpd 4.2.8p17@1.4004-o Tue Jul 15 03:00:16 UTC 2025 (1): Starting Jul 15 05:15:24.235269 ntpd[1973]: 15 Jul 05:15:24 ntpd[1973]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 15 05:15:24.235269 ntpd[1973]: 15 Jul 05:15:24 ntpd[1973]: ---------------------------------------------------- Jul 15 05:15:24.235269 ntpd[1973]: 15 Jul 05:15:24 ntpd[1973]: ntp-4 is maintained by Network Time Foundation, Jul 15 05:15:24.235269 ntpd[1973]: 15 Jul 05:15:24 ntpd[1973]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 15 05:15:24.235269 ntpd[1973]: 15 Jul 05:15:24 ntpd[1973]: corporation. Support and training for ntp-4 are Jul 15 05:15:24.235269 ntpd[1973]: 15 Jul 05:15:24 ntpd[1973]: available at https://www.nwtime.org/support Jul 15 05:15:24.235269 ntpd[1973]: 15 Jul 05:15:24 ntpd[1973]: ---------------------------------------------------- Jul 15 05:15:24.234567 ntpd[1973]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 15 05:15:24.234577 ntpd[1973]: ---------------------------------------------------- Jul 15 05:15:24.234586 ntpd[1973]: ntp-4 is maintained by Network Time Foundation, Jul 15 05:15:24.234594 ntpd[1973]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 15 05:15:24.238188 (ntainerd)[2005]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 15 05:15:24.234603 ntpd[1973]: corporation. 
Support and training for ntp-4 are Jul 15 05:15:24.251064 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 15 05:15:24.262270 coreos-metadata[1964]: Jul 15 05:15:24.246 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 15 05:15:24.262270 coreos-metadata[1964]: Jul 15 05:15:24.248 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jul 15 05:15:24.262270 coreos-metadata[1964]: Jul 15 05:15:24.255 INFO Fetch successful Jul 15 05:15:24.262270 coreos-metadata[1964]: Jul 15 05:15:24.255 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jul 15 05:15:24.262270 coreos-metadata[1964]: Jul 15 05:15:24.259 INFO Fetch successful Jul 15 05:15:24.262270 coreos-metadata[1964]: Jul 15 05:15:24.260 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jul 15 05:15:24.262270 coreos-metadata[1964]: Jul 15 05:15:24.261 INFO Fetch successful Jul 15 05:15:24.262270 coreos-metadata[1964]: Jul 15 05:15:24.261 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jul 15 05:15:24.234612 ntpd[1973]: available at https://www.nwtime.org/support Jul 15 05:15:24.256795 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 15 05:15:24.269587 coreos-metadata[1964]: Jul 15 05:15:24.266 INFO Fetch successful Jul 15 05:15:24.269587 coreos-metadata[1964]: Jul 15 05:15:24.267 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jul 15 05:15:24.269587 coreos-metadata[1964]: Jul 15 05:15:24.269 INFO Fetch failed with 404: resource not found Jul 15 05:15:24.269587 coreos-metadata[1964]: Jul 15 05:15:24.269 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jul 15 05:15:24.234623 ntpd[1973]: ---------------------------------------------------- Jul 15 05:15:24.270358 ntpd[1973]: 15 Jul 05:15:24 ntpd[1973]: proto: precision = 0.060 usec (-24) Jul 15 05:15:24.270358 ntpd[1973]: 15 Jul 05:15:24 ntpd[1973]: basedate set to 2025-07-03 Jul 15 05:15:24.270358 ntpd[1973]: 15 Jul 05:15:24 ntpd[1973]: gps base set to 2025-07-06 (week 2374) Jul 15 05:15:24.256831 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 15 05:15:24.250821 dbus-daemon[1965]: [system] SELinux support is enabled Jul 15 05:15:24.273095 coreos-metadata[1964]: Jul 15 05:15:24.271 INFO Fetch successful Jul 15 05:15:24.273095 coreos-metadata[1964]: Jul 15 05:15:24.271 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jul 15 05:15:24.273095 coreos-metadata[1964]: Jul 15 05:15:24.272 INFO Fetch successful Jul 15 05:15:24.273095 coreos-metadata[1964]: Jul 15 05:15:24.272 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jul 15 05:15:24.258766 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 15 05:15:24.266813 ntpd[1973]: proto: precision = 0.060 usec (-24) Jul 15 05:15:24.258792 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
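The coreos-metadata entries above show the agent first PUTting the IMDSv2 token endpoint and then fetching individual items under the 2021-01-03 metadata tree (with a 404 for ipv6 on this instance, which has no IPv6 address). Below is a minimal Python sketch of that token-then-fetch sequence, for reference only: the endpoint and metadata paths are taken from the log, while the 21600-second token TTL and the helper names are assumptions of this sketch, not part of the agent.

```python
# Minimal sketch of the IMDSv2 "put token, then fetch" sequence that the
# coreos-metadata entries above record. Endpoint and paths come from the
# log; the 21600 s token TTL is an assumed value.
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl_seconds: int = 21600) -> str:
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path: str, token: str) -> str:
    req = urllib.request.Request(
        f"{IMDS}/2021-01-03/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    token = imds_token()
    # A few of the same items the agent fetches above; items that do not
    # exist for the instance (e.g. meta-data/ipv6 here) return 404.
    for item in ("meta-data/instance-id",
                 "meta-data/instance-type",
                 "meta-data/local-ipv4",
                 "meta-data/placement/availability-zone"):
        print(item, "=", imds_get(item, token))
```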
Jul 15 05:15:24.267418 dbus-daemon[1965]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1721 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 15 05:15:24.268096 ntpd[1973]: basedate set to 2025-07-03 Jul 15 05:15:24.268117 ntpd[1973]: gps base set to 2025-07-06 (week 2374) Jul 15 05:15:24.276461 jq[2016]: true Jul 15 05:15:24.278398 ntpd[1973]: Listen and drop on 0 v6wildcard [::]:123 Jul 15 05:15:24.278533 ntpd[1973]: 15 Jul 05:15:24 ntpd[1973]: Listen and drop on 0 v6wildcard [::]:123 Jul 15 05:15:24.278533 ntpd[1973]: 15 Jul 05:15:24 ntpd[1973]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 15 05:15:24.278456 ntpd[1973]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 15 05:15:24.279042 dbus-daemon[1965]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 15 05:15:24.279282 coreos-metadata[1964]: Jul 15 05:15:24.279 INFO Fetch successful Jul 15 05:15:24.279428 coreos-metadata[1964]: Jul 15 05:15:24.279 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jul 15 05:15:24.286085 ntpd[1973]: Listen normally on 2 lo 127.0.0.1:123 Jul 15 05:15:24.286140 ntpd[1973]: Listen normally on 3 eth0 172.31.23.74:123 Jul 15 05:15:24.286266 ntpd[1973]: 15 Jul 05:15:24 ntpd[1973]: Listen normally on 2 lo 127.0.0.1:123 Jul 15 05:15:24.286266 ntpd[1973]: 15 Jul 05:15:24 ntpd[1973]: Listen normally on 3 eth0 172.31.23.74:123 Jul 15 05:15:24.286266 ntpd[1973]: 15 Jul 05:15:24 ntpd[1973]: Listen normally on 4 lo [::1]:123 Jul 15 05:15:24.286266 ntpd[1973]: 15 Jul 05:15:24 ntpd[1973]: bind(21) AF_INET6 fe80::493:e7ff:fed6:a0d5%2#123 flags 0x11 failed: Cannot assign requested address Jul 15 05:15:24.286266 ntpd[1973]: 15 Jul 05:15:24 ntpd[1973]: unable to create socket on eth0 (5) for fe80::493:e7ff:fed6:a0d5%2#123 Jul 15 05:15:24.286181 ntpd[1973]: Listen normally on 4 lo [::1]:123 Jul 15 05:15:24.302301 ntpd[1973]: 15 Jul 05:15:24 ntpd[1973]: failed to init interface for address fe80::493:e7ff:fed6:a0d5%2 Jul 15 05:15:24.302301 ntpd[1973]: 15 Jul 05:15:24 ntpd[1973]: Listening on routing socket on fd #21 for interface updates Jul 15 05:15:24.302301 ntpd[1973]: 15 Jul 05:15:24 ntpd[1973]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 15 05:15:24.302301 ntpd[1973]: 15 Jul 05:15:24 ntpd[1973]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 15 05:15:24.302469 update_engine[1984]: I20250715 05:15:24.295914 1984 update_check_scheduler.cc:74] Next update check in 8m43s Jul 15 05:15:24.302512 coreos-metadata[1964]: Jul 15 05:15:24.287 INFO Fetch successful Jul 15 05:15:24.302512 coreos-metadata[1964]: Jul 15 05:15:24.287 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jul 15 05:15:24.302512 coreos-metadata[1964]: Jul 15 05:15:24.293 INFO Fetch successful Jul 15 05:15:24.286236 ntpd[1973]: bind(21) AF_INET6 fe80::493:e7ff:fed6:a0d5%2#123 flags 0x11 failed: Cannot assign requested address Jul 15 05:15:24.286259 ntpd[1973]: unable to create socket on eth0 (5) for fe80::493:e7ff:fed6:a0d5%2#123 Jul 15 05:15:24.286273 ntpd[1973]: failed to init interface for address fe80::493:e7ff:fed6:a0d5%2 Jul 15 05:15:24.286308 ntpd[1973]: Listening on routing socket on fd #21 for interface updates Jul 15 05:15:24.293322 ntpd[1973]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 15 05:15:24.293357 ntpd[1973]: kernel reports TIME_ERROR: 0x41: Clock 
Unsynchronized Jul 15 05:15:24.307973 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jul 15 05:15:24.324148 systemd[1]: Started update-engine.service - Update Engine. Jul 15 05:15:24.331696 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 15 05:15:24.332149 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 15 05:15:24.350196 extend-filesystems[2014]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 15 05:15:24.350196 extend-filesystems[2014]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 15 05:15:24.350196 extend-filesystems[2014]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jul 15 05:15:24.357214 extend-filesystems[1968]: Resized filesystem in /dev/nvme0n1p9 Jul 15 05:15:24.352106 systemd[1]: Finished setup-oem.service - Setup OEM. Jul 15 05:15:24.353951 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 15 05:15:24.354759 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 15 05:15:24.452190 systemd-logind[1981]: Watching system buttons on /dev/input/event2 (Power Button) Jul 15 05:15:24.455695 systemd-logind[1981]: Watching system buttons on /dev/input/event3 (Sleep Button) Jul 15 05:15:24.455731 systemd-logind[1981]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 15 05:15:24.455982 systemd-logind[1981]: New seat seat0. Jul 15 05:15:24.457372 systemd[1]: Started systemd-logind.service - User Login Management. Jul 15 05:15:24.467586 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 15 05:15:24.469315 bash[2062]: Updated "/home/core/.ssh/authorized_keys" Jul 15 05:15:24.469979 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 15 05:15:24.472946 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 15 05:15:24.483057 systemd[1]: Starting sshkeys.service... Jul 15 05:15:24.584989 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 15 05:15:24.591114 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 15 05:15:24.630128 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 15 05:15:24.821177 coreos-metadata[2111]: Jul 15 05:15:24.820 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 15 05:15:24.823866 coreos-metadata[2111]: Jul 15 05:15:24.823 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jul 15 05:15:24.824743 coreos-metadata[2111]: Jul 15 05:15:24.824 INFO Fetch successful Jul 15 05:15:24.825037 coreos-metadata[2111]: Jul 15 05:15:24.824 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 15 05:15:24.827721 coreos-metadata[2111]: Jul 15 05:15:24.827 INFO Fetch successful Jul 15 05:15:24.830841 unknown[2111]: wrote ssh authorized keys file for user: core Jul 15 05:15:24.890820 locksmithd[2036]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 15 05:15:24.915707 update-ssh-keys[2142]: Updated "/home/core/.ssh/authorized_keys" Jul 15 05:15:24.921476 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 15 05:15:24.925600 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 15 05:15:24.940286 systemd[1]: Finished sshkeys.service. 
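For scale, the kernel and resize2fs lines above describe the root ext4 filesystem on nvme0n1p9 growing from 553472 to 1489915 blocks of 4 KiB during first boot; the arithmetic works out to roughly 2.1 GiB before and 5.7 GiB after:

```python
# Arithmetic behind the extend-filesystems / EXT4-fs resize lines above.
BLOCK = 4096  # bytes, per the "(4k) blocks" note in the log

old_blocks, new_blocks = 553472, 1489915
old_bytes, new_bytes = old_blocks * BLOCK, new_blocks * BLOCK

print(f"before: {old_bytes / 2**30:.2f} GiB")              # ~2.11 GiB
print(f"after:  {new_bytes / 2**30:.2f} GiB")              # ~5.68 GiB
print(f"growth: {(new_bytes - old_bytes) / 2**30:.2f} GiB")
```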
Jul 15 05:15:24.954883 dbus-daemon[1965]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 15 05:15:24.964793 dbus-daemon[1965]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2032 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 15 05:15:24.974055 systemd[1]: Starting polkit.service - Authorization Manager... Jul 15 05:15:25.199779 systemd-networkd[1721]: eth0: Gained IPv6LL Jul 15 05:15:25.206902 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 15 05:15:25.209188 systemd[1]: Reached target network-online.target - Network is Online. Jul 15 05:15:25.217801 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jul 15 05:15:25.224022 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:15:25.225115 polkitd[2166]: Started polkitd version 126 Jul 15 05:15:25.228232 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 15 05:15:25.248643 polkitd[2166]: Loading rules from directory /etc/polkit-1/rules.d Jul 15 05:15:25.255402 polkitd[2166]: Loading rules from directory /run/polkit-1/rules.d Jul 15 05:15:25.255476 polkitd[2166]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 15 05:15:25.256631 polkitd[2166]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jul 15 05:15:25.259752 polkitd[2166]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 15 05:15:25.259821 polkitd[2166]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 15 05:15:25.265126 polkitd[2166]: Finished loading, compiling and executing 2 rules Jul 15 05:15:25.268878 systemd[1]: Started polkit.service - Authorization Manager. Jul 15 05:15:25.274055 dbus-daemon[1965]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 15 05:15:25.278556 containerd[2005]: time="2025-07-15T05:15:25Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 15 05:15:25.279831 polkitd[2166]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 15 05:15:25.285548 containerd[2005]: time="2025-07-15T05:15:25.285010087Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Jul 15 05:15:25.325740 systemd-resolved[1921]: System hostname changed to 'ip-172-31-23-74'. Jul 15 05:15:25.327257 systemd-hostnamed[2032]: Hostname set to (transient) Jul 15 05:15:25.336213 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jul 15 05:15:25.357389 containerd[2005]: time="2025-07-15T05:15:25.357332438Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.581µs" Jul 15 05:15:25.362176 containerd[2005]: time="2025-07-15T05:15:25.360859956Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 15 05:15:25.362176 containerd[2005]: time="2025-07-15T05:15:25.360948082Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 15 05:15:25.362176 containerd[2005]: time="2025-07-15T05:15:25.361159429Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 15 05:15:25.362176 containerd[2005]: time="2025-07-15T05:15:25.361182569Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 15 05:15:25.362176 containerd[2005]: time="2025-07-15T05:15:25.361214965Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 05:15:25.362176 containerd[2005]: time="2025-07-15T05:15:25.361282637Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 05:15:25.362176 containerd[2005]: time="2025-07-15T05:15:25.361298175Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 05:15:25.362176 containerd[2005]: time="2025-07-15T05:15:25.361625241Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 05:15:25.362176 containerd[2005]: time="2025-07-15T05:15:25.361650258Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 05:15:25.365067 containerd[2005]: time="2025-07-15T05:15:25.361666091Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 05:15:25.365067 containerd[2005]: time="2025-07-15T05:15:25.363928013Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 15 05:15:25.365067 containerd[2005]: time="2025-07-15T05:15:25.364079005Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 15 05:15:25.365067 containerd[2005]: time="2025-07-15T05:15:25.364351036Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 05:15:25.365067 containerd[2005]: time="2025-07-15T05:15:25.364390891Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 05:15:25.365067 containerd[2005]: time="2025-07-15T05:15:25.364405914Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 15 05:15:25.365067 containerd[2005]: time="2025-07-15T05:15:25.364456875Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 15 05:15:25.365067 containerd[2005]: 
time="2025-07-15T05:15:25.364817616Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 15 05:15:25.365067 containerd[2005]: time="2025-07-15T05:15:25.364893066Z" level=info msg="metadata content store policy set" policy=shared Jul 15 05:15:25.376218 containerd[2005]: time="2025-07-15T05:15:25.376166540Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 15 05:15:25.376450 containerd[2005]: time="2025-07-15T05:15:25.376417838Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 15 05:15:25.376595 containerd[2005]: time="2025-07-15T05:15:25.376577871Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 15 05:15:25.377422 containerd[2005]: time="2025-07-15T05:15:25.376657895Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 15 05:15:25.377549 containerd[2005]: time="2025-07-15T05:15:25.377527527Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 15 05:15:25.377644 containerd[2005]: time="2025-07-15T05:15:25.377629351Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 15 05:15:25.379871 containerd[2005]: time="2025-07-15T05:15:25.379839095Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 15 05:15:25.380079 containerd[2005]: time="2025-07-15T05:15:25.380052817Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 15 05:15:25.380451 containerd[2005]: time="2025-07-15T05:15:25.380427683Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 15 05:15:25.380538 containerd[2005]: time="2025-07-15T05:15:25.380524339Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 15 05:15:25.380645 containerd[2005]: time="2025-07-15T05:15:25.380630930Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 15 05:15:25.380736 containerd[2005]: time="2025-07-15T05:15:25.380722344Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 15 05:15:25.382268 containerd[2005]: time="2025-07-15T05:15:25.380948033Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 15 05:15:25.382268 containerd[2005]: time="2025-07-15T05:15:25.380988227Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 15 05:15:25.382268 containerd[2005]: time="2025-07-15T05:15:25.381018001Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 15 05:15:25.382268 containerd[2005]: time="2025-07-15T05:15:25.381039623Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 15 05:15:25.382268 containerd[2005]: time="2025-07-15T05:15:25.381061108Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 15 05:15:25.382268 containerd[2005]: time="2025-07-15T05:15:25.381077796Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 15 05:15:25.382268 containerd[2005]: 
time="2025-07-15T05:15:25.381099724Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 15 05:15:25.382268 containerd[2005]: time="2025-07-15T05:15:25.381121119Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 15 05:15:25.382268 containerd[2005]: time="2025-07-15T05:15:25.381144752Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 15 05:15:25.382268 containerd[2005]: time="2025-07-15T05:15:25.381161931Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 15 05:15:25.382268 containerd[2005]: time="2025-07-15T05:15:25.381181884Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 15 05:15:25.382268 containerd[2005]: time="2025-07-15T05:15:25.381278483Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 15 05:15:25.382268 containerd[2005]: time="2025-07-15T05:15:25.381298165Z" level=info msg="Start snapshots syncer" Jul 15 05:15:25.382268 containerd[2005]: time="2025-07-15T05:15:25.381328500Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 15 05:15:25.388944 containerd[2005]: time="2025-07-15T05:15:25.385935637Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 15 05:15:25.388944 containerd[2005]: time="2025-07-15T05:15:25.386027411Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 15 05:15:25.389191 containerd[2005]: time="2025-07-15T05:15:25.386171824Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 
Jul 15 05:15:25.389191 containerd[2005]: time="2025-07-15T05:15:25.386345775Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 15 05:15:25.389191 containerd[2005]: time="2025-07-15T05:15:25.386385423Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 15 05:15:25.389191 containerd[2005]: time="2025-07-15T05:15:25.386404646Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 15 05:15:25.389191 containerd[2005]: time="2025-07-15T05:15:25.386426630Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 15 05:15:25.389191 containerd[2005]: time="2025-07-15T05:15:25.386451263Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 15 05:15:25.389191 containerd[2005]: time="2025-07-15T05:15:25.386471862Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 15 05:15:25.389191 containerd[2005]: time="2025-07-15T05:15:25.386490043Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 15 05:15:25.389191 containerd[2005]: time="2025-07-15T05:15:25.386530869Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 15 05:15:25.389191 containerd[2005]: time="2025-07-15T05:15:25.386552431Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 15 05:15:25.389191 containerd[2005]: time="2025-07-15T05:15:25.386574115Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 15 05:15:25.389191 containerd[2005]: time="2025-07-15T05:15:25.386621076Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 05:15:25.389191 containerd[2005]: time="2025-07-15T05:15:25.386643936Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 05:15:25.389191 containerd[2005]: time="2025-07-15T05:15:25.386665052Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 05:15:25.389715 containerd[2005]: time="2025-07-15T05:15:25.386703212Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 05:15:25.389715 containerd[2005]: time="2025-07-15T05:15:25.386717570Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 15 05:15:25.389715 containerd[2005]: time="2025-07-15T05:15:25.386736740Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 15 05:15:25.389715 containerd[2005]: time="2025-07-15T05:15:25.386760058Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 15 05:15:25.389715 containerd[2005]: time="2025-07-15T05:15:25.386789626Z" level=info msg="runtime interface created" Jul 15 05:15:25.389715 containerd[2005]: time="2025-07-15T05:15:25.386798534Z" level=info msg="created NRI interface" Jul 15 05:15:25.389715 containerd[2005]: time="2025-07-15T05:15:25.386811994Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 
Jul 15 05:15:25.389715 containerd[2005]: time="2025-07-15T05:15:25.386836324Z" level=info msg="Connect containerd service" Jul 15 05:15:25.389715 containerd[2005]: time="2025-07-15T05:15:25.386883198Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 15 05:15:25.394818 containerd[2005]: time="2025-07-15T05:15:25.394320572Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 05:15:25.425017 sshd_keygen[2020]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 15 05:15:25.426514 amazon-ssm-agent[2168]: Initializing new seelog logger Jul 15 05:15:25.431510 amazon-ssm-agent[2168]: New Seelog Logger Creation Complete Jul 15 05:15:25.431510 amazon-ssm-agent[2168]: 2025/07/15 05:15:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 05:15:25.431510 amazon-ssm-agent[2168]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 05:15:25.432948 amazon-ssm-agent[2168]: 2025/07/15 05:15:25 processing appconfig overrides Jul 15 05:15:25.435167 amazon-ssm-agent[2168]: 2025/07/15 05:15:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 05:15:25.435167 amazon-ssm-agent[2168]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 05:15:25.435167 amazon-ssm-agent[2168]: 2025/07/15 05:15:25 processing appconfig overrides Jul 15 05:15:25.435167 amazon-ssm-agent[2168]: 2025-07-15 05:15:25.4341 INFO Proxy environment variables: Jul 15 05:15:25.435167 amazon-ssm-agent[2168]: 2025/07/15 05:15:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 05:15:25.435167 amazon-ssm-agent[2168]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 05:15:25.435813 amazon-ssm-agent[2168]: 2025/07/15 05:15:25 processing appconfig overrides Jul 15 05:15:25.440573 amazon-ssm-agent[2168]: 2025/07/15 05:15:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 05:15:25.440995 amazon-ssm-agent[2168]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 05:15:25.441205 amazon-ssm-agent[2168]: 2025/07/15 05:15:25 processing appconfig overrides Jul 15 05:15:25.469337 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 15 05:15:25.473229 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 15 05:15:25.477036 systemd[1]: Started sshd@0-172.31.23.74:22-139.178.89.65:51646.service - OpenSSH per-connection server daemon (139.178.89.65:51646). Jul 15 05:15:25.523556 systemd[1]: issuegen.service: Deactivated successfully. Jul 15 05:15:25.523892 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 15 05:15:25.529488 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 15 05:15:25.538839 amazon-ssm-agent[2168]: 2025-07-15 05:15:25.4341 INFO no_proxy: Jul 15 05:15:25.574476 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 15 05:15:25.586032 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 15 05:15:25.591329 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 15 05:15:25.592299 systemd[1]: Reached target getty.target - Login Prompts. 
Jul 15 05:15:25.615963 tar[1991]: linux-amd64/LICENSE Jul 15 05:15:25.616353 tar[1991]: linux-amd64/README.md Jul 15 05:15:25.651904 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 15 05:15:25.655601 amazon-ssm-agent[2168]: 2025-07-15 05:15:25.4341 INFO https_proxy: Jul 15 05:15:25.753845 amazon-ssm-agent[2168]: 2025-07-15 05:15:25.4341 INFO http_proxy: Jul 15 05:15:25.849272 sshd[2208]: Accepted publickey for core from 139.178.89.65 port 51646 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:15:25.853059 amazon-ssm-agent[2168]: 2025-07-15 05:15:25.4343 INFO Checking if agent identity type OnPrem can be assumed Jul 15 05:15:25.855758 sshd-session[2208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:15:25.882502 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 15 05:15:25.886248 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 15 05:15:25.899211 containerd[2005]: time="2025-07-15T05:15:25.899172627Z" level=info msg="Start subscribing containerd event" Jul 15 05:15:25.899565 containerd[2005]: time="2025-07-15T05:15:25.899328790Z" level=info msg="Start recovering state" Jul 15 05:15:25.900987 containerd[2005]: time="2025-07-15T05:15:25.900960533Z" level=info msg="Start event monitor" Jul 15 05:15:25.901696 containerd[2005]: time="2025-07-15T05:15:25.901058444Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 15 05:15:25.901696 containerd[2005]: time="2025-07-15T05:15:25.901085467Z" level=info msg="Start cni network conf syncer for default" Jul 15 05:15:25.901696 containerd[2005]: time="2025-07-15T05:15:25.901272588Z" level=info msg="Start streaming server" Jul 15 05:15:25.901696 containerd[2005]: time="2025-07-15T05:15:25.901285458Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 15 05:15:25.901696 containerd[2005]: time="2025-07-15T05:15:25.901295624Z" level=info msg="runtime interface starting up..." Jul 15 05:15:25.903761 containerd[2005]: time="2025-07-15T05:15:25.903725736Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 15 05:15:25.903835 containerd[2005]: time="2025-07-15T05:15:25.903782865Z" level=info msg="starting plugins..." Jul 15 05:15:25.903835 containerd[2005]: time="2025-07-15T05:15:25.903814232Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 15 05:15:25.904212 containerd[2005]: time="2025-07-15T05:15:25.904188908Z" level=info msg="containerd successfully booted in 0.626151s" Jul 15 05:15:25.904952 systemd-logind[1981]: New session 1 of user core. Jul 15 05:15:25.906384 systemd[1]: Started containerd.service - containerd container runtime. Jul 15 05:15:25.921989 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 15 05:15:25.931316 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 15 05:15:25.952238 amazon-ssm-agent[2168]: 2025-07-15 05:15:25.4345 INFO Checking if agent identity type EC2 can be assumed Jul 15 05:15:25.954548 (systemd)[2235]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 15 05:15:25.957814 systemd-logind[1981]: New session c1 of user core. 
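The "Accepted publickey for core ... RSA SHA256:GkB2..." entries identify the client key by its OpenSSH-style fingerprint: the SHA-256 digest of the base64-decoded public key blob, re-encoded as unpadded base64. A small illustration of that computation (the helper name and the authorized_keys-style input format are this sketch's own choices):

```python
# Compute an OpenSSH SHA256 fingerprint like the ones sshd logs above.
import base64
import hashlib

def openssh_fingerprint(pubkey_line: str) -> str:
    """pubkey_line is a single authorized_keys-style line, e.g.
    'ssh-rsa AAAAB3... comment'."""
    blob = base64.b64decode(pubkey_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    # OpenSSH prints the digest as unpadded base64 with a SHA256: prefix.
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

if __name__ == "__main__":
    with open("/home/core/.ssh/authorized_keys") as f:
        for line in f:
            if line.strip() and not line.startswith("#"):
                print(openssh_fingerprint(line))
```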
Jul 15 05:15:26.052582 amazon-ssm-agent[2168]: 2025-07-15 05:15:25.5732 INFO Agent will take identity from EC2 Jul 15 05:15:26.152018 amazon-ssm-agent[2168]: 2025-07-15 05:15:25.5761 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jul 15 05:15:26.194801 systemd[2235]: Queued start job for default target default.target. Jul 15 05:15:26.195468 amazon-ssm-agent[2168]: 2025/07/15 05:15:26 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 05:15:26.195468 amazon-ssm-agent[2168]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 05:15:26.195614 amazon-ssm-agent[2168]: 2025/07/15 05:15:26 processing appconfig overrides Jul 15 05:15:26.200059 systemd[2235]: Created slice app.slice - User Application Slice. Jul 15 05:15:26.200221 systemd[2235]: Reached target paths.target - Paths. Jul 15 05:15:26.200277 systemd[2235]: Reached target timers.target - Timers. Jul 15 05:15:26.201795 systemd[2235]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 15 05:15:26.214759 systemd[2235]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 15 05:15:26.214899 systemd[2235]: Reached target sockets.target - Sockets. Jul 15 05:15:26.214957 systemd[2235]: Reached target basic.target - Basic System. Jul 15 05:15:26.214994 systemd[2235]: Reached target default.target - Main User Target. Jul 15 05:15:26.215027 systemd[2235]: Startup finished in 244ms. Jul 15 05:15:26.215146 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 15 05:15:26.221103 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 15 05:15:26.222071 amazon-ssm-agent[2168]: 2025-07-15 05:15:25.5761 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jul 15 05:15:26.222766 amazon-ssm-agent[2168]: 2025-07-15 05:15:25.5761 INFO [amazon-ssm-agent] Starting Core Agent Jul 15 05:15:26.222766 amazon-ssm-agent[2168]: 2025-07-15 05:15:25.5768 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Jul 15 05:15:26.222766 amazon-ssm-agent[2168]: 2025-07-15 05:15:25.5768 INFO [Registrar] Starting registrar module Jul 15 05:15:26.222766 amazon-ssm-agent[2168]: 2025-07-15 05:15:25.5878 INFO [EC2Identity] Checking disk for registration info Jul 15 05:15:26.222766 amazon-ssm-agent[2168]: 2025-07-15 05:15:25.5879 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jul 15 05:15:26.222766 amazon-ssm-agent[2168]: 2025-07-15 05:15:25.5879 INFO [EC2Identity] Generating registration keypair Jul 15 05:15:26.222766 amazon-ssm-agent[2168]: 2025-07-15 05:15:26.1536 INFO [EC2Identity] Checking write access before registering Jul 15 05:15:26.222766 amazon-ssm-agent[2168]: 2025-07-15 05:15:26.1541 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jul 15 05:15:26.222766 amazon-ssm-agent[2168]: 2025-07-15 05:15:26.1952 INFO [EC2Identity] EC2 registration was successful. Jul 15 05:15:26.222766 amazon-ssm-agent[2168]: 2025-07-15 05:15:26.1953 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. 
Jul 15 05:15:26.222766 amazon-ssm-agent[2168]: 2025-07-15 05:15:26.1953 INFO [CredentialRefresher] credentialRefresher has started Jul 15 05:15:26.222766 amazon-ssm-agent[2168]: 2025-07-15 05:15:26.1954 INFO [CredentialRefresher] Starting credentials refresher loop Jul 15 05:15:26.222766 amazon-ssm-agent[2168]: 2025-07-15 05:15:26.2213 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jul 15 05:15:26.223653 amazon-ssm-agent[2168]: 2025-07-15 05:15:26.2219 INFO [CredentialRefresher] Credentials ready Jul 15 05:15:26.252152 amazon-ssm-agent[2168]: 2025-07-15 05:15:26.2233 INFO [CredentialRefresher] Next credential rotation will be in 29.999967966633335 minutes Jul 15 05:15:26.363844 systemd[1]: Started sshd@1-172.31.23.74:22-139.178.89.65:51658.service - OpenSSH per-connection server daemon (139.178.89.65:51658). Jul 15 05:15:26.531787 sshd[2246]: Accepted publickey for core from 139.178.89.65 port 51658 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:15:26.533095 sshd-session[2246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:15:26.538598 systemd-logind[1981]: New session 2 of user core. Jul 15 05:15:26.542882 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 15 05:15:26.664924 sshd[2249]: Connection closed by 139.178.89.65 port 51658 Jul 15 05:15:26.665481 sshd-session[2246]: pam_unix(sshd:session): session closed for user core Jul 15 05:15:26.668873 systemd[1]: sshd@1-172.31.23.74:22-139.178.89.65:51658.service: Deactivated successfully. Jul 15 05:15:26.670557 systemd[1]: session-2.scope: Deactivated successfully. Jul 15 05:15:26.672441 systemd-logind[1981]: Session 2 logged out. Waiting for processes to exit. Jul 15 05:15:26.673345 systemd-logind[1981]: Removed session 2. Jul 15 05:15:26.701151 systemd[1]: Started sshd@2-172.31.23.74:22-139.178.89.65:51666.service - OpenSSH per-connection server daemon (139.178.89.65:51666). Jul 15 05:15:26.867462 sshd[2255]: Accepted publickey for core from 139.178.89.65 port 51666 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:15:26.869007 sshd-session[2255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:15:26.875392 systemd-logind[1981]: New session 3 of user core. Jul 15 05:15:26.881904 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 15 05:15:27.000960 sshd[2258]: Connection closed by 139.178.89.65 port 51666 Jul 15 05:15:27.001984 sshd-session[2255]: pam_unix(sshd:session): session closed for user core Jul 15 05:15:27.006259 systemd[1]: sshd@2-172.31.23.74:22-139.178.89.65:51666.service: Deactivated successfully. Jul 15 05:15:27.008455 systemd[1]: session-3.scope: Deactivated successfully. Jul 15 05:15:27.009455 systemd-logind[1981]: Session 3 logged out. Waiting for processes to exit. Jul 15 05:15:27.011133 systemd-logind[1981]: Removed session 3. 
Jul 15 05:15:27.236731 ntpd[1973]: Listen normally on 6 eth0 [fe80::493:e7ff:fed6:a0d5%2]:123 Jul 15 05:15:27.237107 ntpd[1973]: 15 Jul 05:15:27 ntpd[1973]: Listen normally on 6 eth0 [fe80::493:e7ff:fed6:a0d5%2]:123 Jul 15 05:15:27.243661 amazon-ssm-agent[2168]: 2025-07-15 05:15:27.2404 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jul 15 05:15:27.344095 amazon-ssm-agent[2168]: 2025-07-15 05:15:27.2435 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2265) started Jul 15 05:15:27.445109 amazon-ssm-agent[2168]: 2025-07-15 05:15:27.2435 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jul 15 05:15:28.849396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:15:28.850992 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 15 05:15:28.854714 systemd[1]: Startup finished in 2.810s (kernel) + 9.429s (initrd) + 8.823s (userspace) = 21.064s. Jul 15 05:15:28.861629 (kubelet)[2282]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 05:15:30.872445 kubelet[2282]: E0715 05:15:30.872362 2282 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 05:15:30.874950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 05:15:30.875107 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 05:15:30.875548 systemd[1]: kubelet.service: Consumed 1.045s CPU time, 264.1M memory peak. Jul 15 05:15:31.828115 systemd-resolved[1921]: Clock change detected. Flushing caches. Jul 15 05:15:37.629164 systemd[1]: Started sshd@3-172.31.23.74:22-139.178.89.65:52534.service - OpenSSH per-connection server daemon (139.178.89.65:52534). Jul 15 05:15:37.797213 sshd[2294]: Accepted publickey for core from 139.178.89.65 port 52534 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:15:37.798670 sshd-session[2294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:15:37.805039 systemd-logind[1981]: New session 4 of user core. Jul 15 05:15:37.815139 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 15 05:15:37.933500 sshd[2297]: Connection closed by 139.178.89.65 port 52534 Jul 15 05:15:37.934311 sshd-session[2294]: pam_unix(sshd:session): session closed for user core Jul 15 05:15:37.938614 systemd[1]: sshd@3-172.31.23.74:22-139.178.89.65:52534.service: Deactivated successfully. Jul 15 05:15:37.940455 systemd[1]: session-4.scope: Deactivated successfully. Jul 15 05:15:37.941326 systemd-logind[1981]: Session 4 logged out. Waiting for processes to exit. Jul 15 05:15:37.942734 systemd-logind[1981]: Removed session 4. Jul 15 05:15:37.965953 systemd[1]: Started sshd@4-172.31.23.74:22-139.178.89.65:52538.service - OpenSSH per-connection server daemon (139.178.89.65:52538). 
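The kubelet run.go:72 error above (and its repeat after the scheduled restart) comes down to one missing file: /var/lib/kubelet/config.yaml does not exist yet, so the service exits and systemd restarts it later. On kubeadm-provisioned nodes that file is normally written during `kubeadm init`/`kubeadm join`; whether that is the bootstrap flow intended here is an assumption. A minimal sketch of the same existence check:

```python
# Mirror the check the kubelet is failing on: does the kubelet config
# file exist and is it readable? (Path taken from the error in the log.)
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

def kubelet_config_ready(path: Path = KUBELET_CONFIG) -> bool:
    try:
        path.read_text()
    except (FileNotFoundError, PermissionError):
        return False
    return True

if __name__ == "__main__":
    print("kubelet config present:", kubelet_config_ready())
```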
Jul 15 05:15:38.135937 sshd[2303]: Accepted publickey for core from 139.178.89.65 port 52538 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:15:38.137427 sshd-session[2303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:15:38.143156 systemd-logind[1981]: New session 5 of user core. Jul 15 05:15:38.152148 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 15 05:15:38.264370 sshd[2306]: Connection closed by 139.178.89.65 port 52538 Jul 15 05:15:38.265108 sshd-session[2303]: pam_unix(sshd:session): session closed for user core Jul 15 05:15:38.269316 systemd[1]: sshd@4-172.31.23.74:22-139.178.89.65:52538.service: Deactivated successfully. Jul 15 05:15:38.271088 systemd[1]: session-5.scope: Deactivated successfully. Jul 15 05:15:38.271813 systemd-logind[1981]: Session 5 logged out. Waiting for processes to exit. Jul 15 05:15:38.273284 systemd-logind[1981]: Removed session 5. Jul 15 05:15:38.296543 systemd[1]: Started sshd@5-172.31.23.74:22-139.178.89.65:52544.service - OpenSSH per-connection server daemon (139.178.89.65:52544). Jul 15 05:15:38.457612 sshd[2312]: Accepted publickey for core from 139.178.89.65 port 52544 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:15:38.459021 sshd-session[2312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:15:38.464099 systemd-logind[1981]: New session 6 of user core. Jul 15 05:15:38.474114 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 15 05:15:38.589861 sshd[2315]: Connection closed by 139.178.89.65 port 52544 Jul 15 05:15:38.590682 sshd-session[2312]: pam_unix(sshd:session): session closed for user core Jul 15 05:15:38.594009 systemd[1]: sshd@5-172.31.23.74:22-139.178.89.65:52544.service: Deactivated successfully. Jul 15 05:15:38.595559 systemd[1]: session-6.scope: Deactivated successfully. Jul 15 05:15:38.597429 systemd-logind[1981]: Session 6 logged out. Waiting for processes to exit. Jul 15 05:15:38.598529 systemd-logind[1981]: Removed session 6. Jul 15 05:15:38.620834 systemd[1]: Started sshd@6-172.31.23.74:22-139.178.89.65:52558.service - OpenSSH per-connection server daemon (139.178.89.65:52558). Jul 15 05:15:38.785867 sshd[2321]: Accepted publickey for core from 139.178.89.65 port 52558 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:15:38.787184 sshd-session[2321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:15:38.793144 systemd-logind[1981]: New session 7 of user core. Jul 15 05:15:38.798078 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 15 05:15:38.932213 sudo[2325]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 15 05:15:38.932481 sudo[2325]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 05:15:38.949159 sudo[2325]: pam_unix(sudo:session): session closed for user root Jul 15 05:15:38.971565 sshd[2324]: Connection closed by 139.178.89.65 port 52558 Jul 15 05:15:38.972269 sshd-session[2321]: pam_unix(sshd:session): session closed for user core Jul 15 05:15:38.976119 systemd[1]: sshd@6-172.31.23.74:22-139.178.89.65:52558.service: Deactivated successfully. Jul 15 05:15:38.977775 systemd[1]: session-7.scope: Deactivated successfully. Jul 15 05:15:38.979546 systemd-logind[1981]: Session 7 logged out. Waiting for processes to exit. Jul 15 05:15:38.980516 systemd-logind[1981]: Removed session 7. 
Jul 15 05:15:39.015872 systemd[1]: Started sshd@7-172.31.23.74:22-139.178.89.65:41692.service - OpenSSH per-connection server daemon (139.178.89.65:41692). Jul 15 05:15:39.186459 sshd[2331]: Accepted publickey for core from 139.178.89.65 port 41692 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:15:39.187743 sshd-session[2331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:15:39.194560 systemd-logind[1981]: New session 8 of user core. Jul 15 05:15:39.198081 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 15 05:15:39.299623 sudo[2336]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 15 05:15:39.299898 sudo[2336]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 05:15:39.305422 sudo[2336]: pam_unix(sudo:session): session closed for user root Jul 15 05:15:39.311165 sudo[2335]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 15 05:15:39.311450 sudo[2335]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 05:15:39.321927 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 05:15:39.367085 augenrules[2358]: No rules Jul 15 05:15:39.368237 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 05:15:39.368456 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 05:15:39.369677 sudo[2335]: pam_unix(sudo:session): session closed for user root Jul 15 05:15:39.392599 sshd[2334]: Connection closed by 139.178.89.65 port 41692 Jul 15 05:15:39.393255 sshd-session[2331]: pam_unix(sshd:session): session closed for user core Jul 15 05:15:39.397288 systemd[1]: sshd@7-172.31.23.74:22-139.178.89.65:41692.service: Deactivated successfully. Jul 15 05:15:39.399093 systemd[1]: session-8.scope: Deactivated successfully. Jul 15 05:15:39.399835 systemd-logind[1981]: Session 8 logged out. Waiting for processes to exit. Jul 15 05:15:39.401503 systemd-logind[1981]: Removed session 8. Jul 15 05:15:39.423510 systemd[1]: Started sshd@8-172.31.23.74:22-139.178.89.65:41698.service - OpenSSH per-connection server daemon (139.178.89.65:41698). Jul 15 05:15:39.601552 sshd[2367]: Accepted publickey for core from 139.178.89.65 port 41698 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:15:39.602940 sshd-session[2367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:15:39.609154 systemd-logind[1981]: New session 9 of user core. Jul 15 05:15:39.618154 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 15 05:15:39.715637 sudo[2371]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 15 05:15:39.715917 sudo[2371]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 05:15:40.390233 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jul 15 05:15:40.405363 (dockerd)[2389]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 15 05:15:40.863178 dockerd[2389]: time="2025-07-15T05:15:40.862556354Z" level=info msg="Starting up" Jul 15 05:15:40.864263 dockerd[2389]: time="2025-07-15T05:15:40.864225219Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 15 05:15:40.877460 dockerd[2389]: time="2025-07-15T05:15:40.877411889Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jul 15 05:15:40.896269 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1387131840-merged.mount: Deactivated successfully. Jul 15 05:15:40.931903 dockerd[2389]: time="2025-07-15T05:15:40.931686058Z" level=info msg="Loading containers: start." Jul 15 05:15:40.942932 kernel: Initializing XFRM netlink socket Jul 15 05:15:41.301614 (udev-worker)[2410]: Network interface NamePolicy= disabled on kernel command line. Jul 15 05:15:41.353444 systemd-networkd[1721]: docker0: Link UP Jul 15 05:15:41.357813 dockerd[2389]: time="2025-07-15T05:15:41.357768259Z" level=info msg="Loading containers: done." Jul 15 05:15:41.375981 dockerd[2389]: time="2025-07-15T05:15:41.375861430Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 15 05:15:41.376159 dockerd[2389]: time="2025-07-15T05:15:41.375993092Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jul 15 05:15:41.376159 dockerd[2389]: time="2025-07-15T05:15:41.376081128Z" level=info msg="Initializing buildkit" Jul 15 05:15:41.403701 dockerd[2389]: time="2025-07-15T05:15:41.403649728Z" level=info msg="Completed buildkit initialization" Jul 15 05:15:41.412751 dockerd[2389]: time="2025-07-15T05:15:41.412545453Z" level=info msg="Daemon has completed initialization" Jul 15 05:15:41.412751 dockerd[2389]: time="2025-07-15T05:15:41.412695516Z" level=info msg="API listen on /run/docker.sock" Jul 15 05:15:41.412995 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 15 05:15:41.717058 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 15 05:15:41.718626 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:15:41.968108 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:15:41.979286 (kubelet)[2605]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 05:15:42.029579 kubelet[2605]: E0715 05:15:42.029490 2605 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 05:15:42.033911 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 05:15:42.034104 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 05:15:42.034543 systemd[1]: kubelet.service: Consumed 182ms CPU time, 108.4M memory peak. 
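Once dockerd logs "API listen on /run/docker.sock" above, the Engine API is reachable over that UNIX socket. A minimal sketch of probing its liveness endpoint (`GET /_ping` returns "OK"); it needs permission to read /run/docker.sock, and the connection class is a local helper, not part of the Docker tooling:

```python
# Probe the Docker Engine API over the UNIX socket mentioned in the log.
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that talks to a local UNIX socket instead of TCP."""
    def __init__(self, socket_path: str):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

if __name__ == "__main__":
    conn = UnixHTTPConnection("/run/docker.sock")
    conn.request("GET", "/_ping")               # liveness endpoint
    resp = conn.getresponse()
    print(resp.status, resp.read().decode())    # expect: 200 OK
```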
Jul 15 05:15:43.077515 containerd[2005]: time="2025-07-15T05:15:43.077461620Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 15 05:15:43.633422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2941131685.mount: Deactivated successfully. Jul 15 05:15:44.900110 containerd[2005]: time="2025-07-15T05:15:44.899864373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:44.901183 containerd[2005]: time="2025-07-15T05:15:44.901114755Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744" Jul 15 05:15:44.903624 containerd[2005]: time="2025-07-15T05:15:44.903587141Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:44.907149 containerd[2005]: time="2025-07-15T05:15:44.907098720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:44.908446 containerd[2005]: time="2025-07-15T05:15:44.907927305Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 1.830429222s" Jul 15 05:15:44.908446 containerd[2005]: time="2025-07-15T05:15:44.907967534Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 15 05:15:44.908944 containerd[2005]: time="2025-07-15T05:15:44.908887980Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 15 05:15:46.460801 containerd[2005]: time="2025-07-15T05:15:46.460745885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:46.462328 containerd[2005]: time="2025-07-15T05:15:46.462277902Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294" Jul 15 05:15:46.464403 containerd[2005]: time="2025-07-15T05:15:46.464338229Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:46.468105 containerd[2005]: time="2025-07-15T05:15:46.468040665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:46.469394 containerd[2005]: time="2025-07-15T05:15:46.469226633Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 
1.560234775s" Jul 15 05:15:46.469394 containerd[2005]: time="2025-07-15T05:15:46.469270641Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 15 05:15:46.470054 containerd[2005]: time="2025-07-15T05:15:46.469990194Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 15 05:15:47.799034 containerd[2005]: time="2025-07-15T05:15:47.798958433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:47.802427 containerd[2005]: time="2025-07-15T05:15:47.802369494Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671" Jul 15 05:15:47.805947 containerd[2005]: time="2025-07-15T05:15:47.805869534Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:47.810570 containerd[2005]: time="2025-07-15T05:15:47.810503027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:47.811696 containerd[2005]: time="2025-07-15T05:15:47.811517272Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.341491525s" Jul 15 05:15:47.811696 containerd[2005]: time="2025-07-15T05:15:47.811557341Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 15 05:15:47.812274 containerd[2005]: time="2025-07-15T05:15:47.812231639Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 15 05:15:48.876774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2438124176.mount: Deactivated successfully. 
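
The PullImage/ImageCreate entries above come from containerd's CRI plugin resolving each control-plane image by tag and recording the resulting digest and size. Roughly the same pull can be reproduced against the same daemon with the containerd Go client; this is a sketch written against the widely used github.com/containerd/containerd v1 client module (the daemon here reports v2.0.5, whose native client lives under a /v2 module path, so treat the import path as an assumption), using the k8s.io namespace the kubelet's images live in.

    package main

    import (
    	"context"
    	"log"

    	containerd "github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	// Talk to the same containerd instance the kubelet's CRI calls go to.
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// CRI-managed images live in the "k8s.io" namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	img, err := client.Pull(ctx, "registry.k8s.io/kube-apiserver:v1.31.10", containerd.WithPullUnpack)
    	if err != nil {
    		log.Fatal(err)
    	}
    	log.Printf("pulled %s digest=%s", img.Name(), img.Target().Digest)
    }
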
Jul 15 05:15:49.430993 containerd[2005]: time="2025-07-15T05:15:49.430933710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:49.432060 containerd[2005]: time="2025-07-15T05:15:49.432009439Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943" Jul 15 05:15:49.433559 containerd[2005]: time="2025-07-15T05:15:49.433502826Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:49.435689 containerd[2005]: time="2025-07-15T05:15:49.435628756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:49.436419 containerd[2005]: time="2025-07-15T05:15:49.436227067Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 1.62384039s" Jul 15 05:15:49.436419 containerd[2005]: time="2025-07-15T05:15:49.436263741Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 15 05:15:49.437104 containerd[2005]: time="2025-07-15T05:15:49.437065632Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 15 05:15:49.936613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3247176589.mount: Deactivated successfully. 
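
Because each pull entry reports both the bytes read and the elapsed time, effective registry throughput can be read straight off the log; for the kube-proxy pull above that is roughly 30 MB in 1.62 s, i.e. a bit under 19 MB/s. A small sketch of that arithmetic, with the numbers copied from the entries above:

    package main

    import "fmt"

    func main() {
    	// Values reported by containerd for the kube-proxy:v1.31.10 pull above.
    	const bytesRead = 30383943 // "bytes read" while pulling
    	const seconds = 1.62384039 // reported pull duration
    	mbPerSec := float64(bytesRead) / seconds / 1e6
    	fmt.Printf("effective pull throughput: %.1f MB/s\n", mbPerSec) // ~18.7 MB/s
    }
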
Jul 15 05:15:50.774734 containerd[2005]: time="2025-07-15T05:15:50.774684980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:50.778904 containerd[2005]: time="2025-07-15T05:15:50.778070936Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:50.778904 containerd[2005]: time="2025-07-15T05:15:50.778131355Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 15 05:15:50.781922 containerd[2005]: time="2025-07-15T05:15:50.781868353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:50.782503 containerd[2005]: time="2025-07-15T05:15:50.782471462Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.345372158s" Jul 15 05:15:50.782567 containerd[2005]: time="2025-07-15T05:15:50.782509152Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 15 05:15:50.783022 containerd[2005]: time="2025-07-15T05:15:50.782997910Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 15 05:15:51.239737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4215926232.mount: Deactivated successfully. 
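
The ImageCreate events record both the floating tag and the repo digest. Pulling by the digest printed above pins coredns to that exact manifest regardless of where the v1.11.3 tag later points; a short sketch, with the same client setup and module-path caveat as the earlier pull example:

    package main

    import (
    	"context"
    	"log"

    	containerd "github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	// Digest copied from the ImageCreate event above; a digest reference is
    	// immutable, unlike the v1.11.3 tag.
    	ref := "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"
    	if _, err := client.Pull(ctx, ref, containerd.WithPullUnpack); err != nil {
    		log.Fatal(err)
    	}
    	log.Println("pinned pull ok:", ref)
    }
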
Jul 15 05:15:51.247273 containerd[2005]: time="2025-07-15T05:15:51.247215732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 05:15:51.248103 containerd[2005]: time="2025-07-15T05:15:51.248063126Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 15 05:15:51.249172 containerd[2005]: time="2025-07-15T05:15:51.249096756Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 05:15:51.251372 containerd[2005]: time="2025-07-15T05:15:51.251312373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 05:15:51.252149 containerd[2005]: time="2025-07-15T05:15:51.251985739Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 468.954579ms" Jul 15 05:15:51.252149 containerd[2005]: time="2025-07-15T05:15:51.252022766Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 15 05:15:51.252845 containerd[2005]: time="2025-07-15T05:15:51.252730001Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 15 05:15:51.769423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3874400626.mount: Deactivated successfully. Jul 15 05:15:52.243331 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 15 05:15:52.245302 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:15:52.465559 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:15:52.475301 (kubelet)[2797]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 05:15:52.611905 kubelet[2797]: E0715 05:15:52.611648 2797 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 05:15:52.616836 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 05:15:52.617065 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 05:15:52.618354 systemd[1]: kubelet.service: Consumed 191ms CPU time, 106.4M memory peak. 
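
Unlike the other images, pause:3.10 is recorded with an io.cri-containerd.pinned label, which protects the sandbox image from containerd's image garbage collection. A hedged sketch that lists the images in the k8s.io namespace and flags the pinned ones (metadata only, nothing is pulled; same module-path caveat as the earlier sketches):

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	containerd "github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	imgs, err := client.ImageService().List(ctx)
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, img := range imgs {
    		pinned := img.Labels["io.cri-containerd.pinned"] == "pinned"
    		fmt.Printf("%-70s pinned=%v\n", img.Name, pinned)
    	}
    }
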
Jul 15 05:15:53.928399 containerd[2005]: time="2025-07-15T05:15:53.928336920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:53.930317 containerd[2005]: time="2025-07-15T05:15:53.930268469Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jul 15 05:15:53.932861 containerd[2005]: time="2025-07-15T05:15:53.932802898Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:53.941714 containerd[2005]: time="2025-07-15T05:15:53.941626138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:53.943316 containerd[2005]: time="2025-07-15T05:15:53.943091998Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.690328952s" Jul 15 05:15:53.943316 containerd[2005]: time="2025-07-15T05:15:53.943141391Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 15 05:15:55.938482 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 15 05:15:56.883621 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:15:56.883990 systemd[1]: kubelet.service: Consumed 191ms CPU time, 106.4M memory peak. Jul 15 05:15:56.886583 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:15:56.918303 systemd[1]: Reload requested from client PID 2841 ('systemctl') (unit session-9.scope)... Jul 15 05:15:56.918366 systemd[1]: Reloading... Jul 15 05:15:57.054917 zram_generator::config[2881]: No configuration found. Jul 15 05:15:57.208584 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 05:15:57.346912 systemd[1]: Reloading finished in 428 ms. Jul 15 05:15:57.402455 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 15 05:15:57.402541 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 15 05:15:57.402849 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:15:57.402917 systemd[1]: kubelet.service: Consumed 142ms CPU time, 98M memory peak. Jul 15 05:15:57.404589 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:15:57.719401 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:15:57.730320 (kubelet)[2948]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 05:15:57.779822 kubelet[2948]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
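
Two more warnings of the same kind (--pod-infra-container-image and --volume-plugin-dir) follow directly below; all of them point at the file passed via --config. The real schema is KubeletConfiguration from kubelet.config.k8s.io/v1beta1; the struct below is only a local stand-in whose fields mirror two of the deprecated flags, and the values are illustrative examples rather than this host's actual config, serialized here with gopkg.in/yaml.v3 just to show the shape of such a file.

    package main

    import (
    	"fmt"
    	"log"

    	"gopkg.in/yaml.v3"
    )

    // Local stand-in for a few KubeletConfiguration fields; the real type lives
    // in k8s.io/kubelet/config/v1beta1. Values are examples, not read from
    // /var/lib/kubelet/config.yaml on this node.
    type kubeletConfig struct {
    	APIVersion               string `yaml:"apiVersion"`
    	Kind                     string `yaml:"kind"`
    	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"` // replaces --container-runtime-endpoint
    	VolumePluginDir          string `yaml:"volumePluginDir"`          // replaces --volume-plugin-dir
    }

    func main() {
    	cfg := kubeletConfig{
    		APIVersion:               "kubelet.config.k8s.io/v1beta1",
    		Kind:                     "KubeletConfiguration",
    		ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock",
    		VolumePluginDir:          "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
    	}
    	out, err := yaml.Marshal(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Print(string(out))
    }
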
Jul 15 05:15:57.779822 kubelet[2948]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 15 05:15:57.779822 kubelet[2948]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 05:15:57.780306 kubelet[2948]: I0715 05:15:57.779905 2948 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 05:15:58.138748 kubelet[2948]: I0715 05:15:58.138706 2948 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 15 05:15:58.139579 kubelet[2948]: I0715 05:15:58.138971 2948 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 05:15:58.139579 kubelet[2948]: I0715 05:15:58.139361 2948 server.go:934] "Client rotation is on, will bootstrap in background" Jul 15 05:15:58.171416 kubelet[2948]: I0715 05:15:58.171375 2948 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 05:15:58.178847 kubelet[2948]: E0715 05:15:58.178614 2948 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.23.74:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.23.74:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:15:58.185927 kubelet[2948]: I0715 05:15:58.185872 2948 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 05:15:58.192363 kubelet[2948]: I0715 05:15:58.192334 2948 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 05:15:58.195632 kubelet[2948]: I0715 05:15:58.195589 2948 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 15 05:15:58.195852 kubelet[2948]: I0715 05:15:58.195816 2948 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 05:15:58.196079 kubelet[2948]: I0715 05:15:58.195849 2948 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-74","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 05:15:58.196238 kubelet[2948]: I0715 05:15:58.196089 2948 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 05:15:58.196238 kubelet[2948]: I0715 05:15:58.196102 2948 container_manager_linux.go:300] "Creating device plugin manager" Jul 15 05:15:58.197000 kubelet[2948]: I0715 05:15:58.196968 2948 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:15:58.205717 kubelet[2948]: I0715 05:15:58.205669 2948 kubelet.go:408] "Attempting to sync node with API server" Jul 15 05:15:58.205717 kubelet[2948]: I0715 05:15:58.205713 2948 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 05:15:58.205995 kubelet[2948]: I0715 05:15:58.205754 2948 kubelet.go:314] "Adding apiserver pod source" Jul 15 05:15:58.205995 kubelet[2948]: I0715 05:15:58.205776 2948 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 05:15:58.211381 kubelet[2948]: W0715 05:15:58.211136 2948 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-74&limit=500&resourceVersion=0": dial tcp 172.31.23.74:6443: connect: connection refused Jul 15 05:15:58.211381 kubelet[2948]: E0715 05:15:58.211199 2948 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.31.23.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-74&limit=500&resourceVersion=0\": dial tcp 172.31.23.74:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:15:58.212789 kubelet[2948]: W0715 05:15:58.212445 2948 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.23.74:6443: connect: connection refused Jul 15 05:15:58.212789 kubelet[2948]: E0715 05:15:58.212608 2948 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.23.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.23.74:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:15:58.212789 kubelet[2948]: I0715 05:15:58.212689 2948 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 15 05:15:58.225214 kubelet[2948]: I0715 05:15:58.225137 2948 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 05:15:58.226779 kubelet[2948]: W0715 05:15:58.226203 2948 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 15 05:15:58.228348 kubelet[2948]: I0715 05:15:58.228326 2948 server.go:1274] "Started kubelet" Jul 15 05:15:58.234848 kubelet[2948]: I0715 05:15:58.234805 2948 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 05:15:58.241998 kubelet[2948]: I0715 05:15:58.241945 2948 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 05:15:58.242729 kubelet[2948]: E0715 05:15:58.240132 2948 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.74:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.74:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-74.185254e77dfa9a66 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-74,UID:ip-172-31-23-74,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-74,},FirstTimestamp:2025-07-15 05:15:58.228298342 +0000 UTC m=+0.494247998,LastTimestamp:2025-07-15 05:15:58.228298342 +0000 UTC m=+0.494247998,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-74,}" Jul 15 05:15:58.243200 kubelet[2948]: I0715 05:15:58.243131 2948 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 05:15:58.243503 kubelet[2948]: I0715 05:15:58.243484 2948 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 05:15:58.244259 kubelet[2948]: I0715 05:15:58.243777 2948 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 05:15:58.244259 kubelet[2948]: I0715 05:15:58.243994 2948 server.go:449] "Adding debug handlers to kubelet server" Jul 15 05:15:58.247406 kubelet[2948]: I0715 05:15:58.247326 2948 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 15 
05:15:58.247817 kubelet[2948]: E0715 05:15:58.247800 2948 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-23-74\" not found" Jul 15 05:15:58.250089 kubelet[2948]: I0715 05:15:58.250064 2948 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 15 05:15:58.250255 kubelet[2948]: E0715 05:15:58.250216 2948 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-74?timeout=10s\": dial tcp 172.31.23.74:6443: connect: connection refused" interval="200ms" Jul 15 05:15:58.250485 kubelet[2948]: I0715 05:15:58.250460 2948 factory.go:221] Registration of the systemd container factory successfully Jul 15 05:15:58.250568 kubelet[2948]: I0715 05:15:58.250551 2948 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 05:15:58.253264 kubelet[2948]: I0715 05:15:58.253129 2948 factory.go:221] Registration of the containerd container factory successfully Jul 15 05:15:58.257790 kubelet[2948]: I0715 05:15:58.257767 2948 reconciler.go:26] "Reconciler: start to sync state" Jul 15 05:15:58.258549 kubelet[2948]: W0715 05:15:58.258489 2948 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.74:6443: connect: connection refused Jul 15 05:15:58.258689 kubelet[2948]: E0715 05:15:58.258673 2948 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.23.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.23.74:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:15:58.261595 kubelet[2948]: E0715 05:15:58.261551 2948 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 05:15:58.279472 kubelet[2948]: I0715 05:15:58.277641 2948 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 05:15:58.280225 kubelet[2948]: I0715 05:15:58.280190 2948 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 15 05:15:58.280838 kubelet[2948]: I0715 05:15:58.280820 2948 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 15 05:15:58.281021 kubelet[2948]: I0715 05:15:58.281008 2948 kubelet.go:2321] "Starting kubelet main sync loop" Jul 15 05:15:58.281176 kubelet[2948]: E0715 05:15:58.281144 2948 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 05:15:58.281437 kubelet[2948]: I0715 05:15:58.280443 2948 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 15 05:15:58.281561 kubelet[2948]: I0715 05:15:58.281547 2948 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 15 05:15:58.281661 kubelet[2948]: I0715 05:15:58.281650 2948 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:15:58.284596 kubelet[2948]: W0715 05:15:58.284543 2948 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.74:6443: connect: connection refused Jul 15 05:15:58.285256 kubelet[2948]: E0715 05:15:58.284676 2948 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.23.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.23.74:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:15:58.286263 kubelet[2948]: I0715 05:15:58.286246 2948 policy_none.go:49] "None policy: Start" Jul 15 05:15:58.287235 kubelet[2948]: I0715 05:15:58.287220 2948 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 15 05:15:58.287351 kubelet[2948]: I0715 05:15:58.287342 2948 state_mem.go:35] "Initializing new in-memory state store" Jul 15 05:15:58.316615 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 15 05:15:58.330755 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 15 05:15:58.335778 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 15 05:15:58.346004 kubelet[2948]: I0715 05:15:58.345970 2948 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 05:15:58.346252 kubelet[2948]: I0715 05:15:58.346158 2948 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 05:15:58.346252 kubelet[2948]: I0715 05:15:58.346171 2948 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 05:15:58.349935 kubelet[2948]: I0715 05:15:58.348349 2948 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 05:15:58.352257 kubelet[2948]: E0715 05:15:58.352224 2948 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-23-74\" not found" Jul 15 05:15:58.392859 systemd[1]: Created slice kubepods-burstable-pod9add0c532e31f906beaad0c3ac14ba82.slice - libcontainer container kubepods-burstable-pod9add0c532e31f906beaad0c3ac14ba82.slice. Jul 15 05:15:58.406038 systemd[1]: Created slice kubepods-burstable-pod986764d0b15cdc3ee1b144195197daea.slice - libcontainer container kubepods-burstable-pod986764d0b15cdc3ee1b144195197daea.slice. 
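
With the systemd cgroup driver (cgroupDriver="systemd" was logged when this kubelet started), the kubelet asks systemd for one slice per QoS class and one per pod, which is why the slices above are named kubepods-burstable-pod<uid>.slice with the static pods' UIDs. The sketch below reproduces that naming for the common case; the kubelet's real escaping rules (for example for UIDs containing dashes) live in its cgroup manager and are only approximated here.

    package main

    import (
    	"fmt"
    	"strings"
    )

    // Approximate the systemd slice name the kubelet creates for a pod, matching
    // the "kubepods-burstable-pod<uid>.slice" entries above. The dash-to-underscore
    // substitution is an approximation of the kubelet's escaping.
    func podSlice(qosClass, podUID string) string {
    	uid := strings.ReplaceAll(podUID, "-", "_")
    	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, uid)
    }

    func main() {
    	// UID taken from the kube-apiserver static pod seen in this log.
    	fmt.Println(podSlice("burstable", "9add0c532e31f906beaad0c3ac14ba82"))
    	// -> kubepods-burstable-pod9add0c532e31f906beaad0c3ac14ba82.slice
    }
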
Jul 15 05:15:58.411927 systemd[1]: Created slice kubepods-burstable-pod3381a0186e27ed59188f84c0da40532c.slice - libcontainer container kubepods-burstable-pod3381a0186e27ed59188f84c0da40532c.slice. Jul 15 05:15:58.448809 kubelet[2948]: I0715 05:15:58.448757 2948 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-74" Jul 15 05:15:58.449121 kubelet[2948]: E0715 05:15:58.449098 2948 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.23.74:6443/api/v1/nodes\": dial tcp 172.31.23.74:6443: connect: connection refused" node="ip-172-31-23-74" Jul 15 05:15:58.450635 kubelet[2948]: E0715 05:15:58.450598 2948 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-74?timeout=10s\": dial tcp 172.31.23.74:6443: connect: connection refused" interval="400ms" Jul 15 05:15:58.459825 kubelet[2948]: I0715 05:15:58.459792 2948 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9add0c532e31f906beaad0c3ac14ba82-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-74\" (UID: \"9add0c532e31f906beaad0c3ac14ba82\") " pod="kube-system/kube-apiserver-ip-172-31-23-74" Jul 15 05:15:58.459961 kubelet[2948]: I0715 05:15:58.459932 2948 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/986764d0b15cdc3ee1b144195197daea-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-74\" (UID: \"986764d0b15cdc3ee1b144195197daea\") " pod="kube-system/kube-controller-manager-ip-172-31-23-74" Jul 15 05:15:58.460006 kubelet[2948]: I0715 05:15:58.459963 2948 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/986764d0b15cdc3ee1b144195197daea-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-74\" (UID: \"986764d0b15cdc3ee1b144195197daea\") " pod="kube-system/kube-controller-manager-ip-172-31-23-74" Jul 15 05:15:58.460006 kubelet[2948]: I0715 05:15:58.459993 2948 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/986764d0b15cdc3ee1b144195197daea-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-74\" (UID: \"986764d0b15cdc3ee1b144195197daea\") " pod="kube-system/kube-controller-manager-ip-172-31-23-74" Jul 15 05:15:58.460058 kubelet[2948]: I0715 05:15:58.460009 2948 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/986764d0b15cdc3ee1b144195197daea-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-74\" (UID: \"986764d0b15cdc3ee1b144195197daea\") " pod="kube-system/kube-controller-manager-ip-172-31-23-74" Jul 15 05:15:58.460058 kubelet[2948]: I0715 05:15:58.460029 2948 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9add0c532e31f906beaad0c3ac14ba82-ca-certs\") pod \"kube-apiserver-ip-172-31-23-74\" (UID: \"9add0c532e31f906beaad0c3ac14ba82\") " pod="kube-system/kube-apiserver-ip-172-31-23-74" Jul 15 05:15:58.460110 kubelet[2948]: I0715 05:15:58.460055 2948 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9add0c532e31f906beaad0c3ac14ba82-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-74\" (UID: \"9add0c532e31f906beaad0c3ac14ba82\") " pod="kube-system/kube-apiserver-ip-172-31-23-74" Jul 15 05:15:58.460110 kubelet[2948]: I0715 05:15:58.460073 2948 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/986764d0b15cdc3ee1b144195197daea-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-74\" (UID: \"986764d0b15cdc3ee1b144195197daea\") " pod="kube-system/kube-controller-manager-ip-172-31-23-74" Jul 15 05:15:58.460110 kubelet[2948]: I0715 05:15:58.460090 2948 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3381a0186e27ed59188f84c0da40532c-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-74\" (UID: \"3381a0186e27ed59188f84c0da40532c\") " pod="kube-system/kube-scheduler-ip-172-31-23-74" Jul 15 05:15:58.651282 kubelet[2948]: I0715 05:15:58.651040 2948 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-74" Jul 15 05:15:58.651464 kubelet[2948]: E0715 05:15:58.651332 2948 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.23.74:6443/api/v1/nodes\": dial tcp 172.31.23.74:6443: connect: connection refused" node="ip-172-31-23-74" Jul 15 05:15:58.703664 containerd[2005]: time="2025-07-15T05:15:58.703615018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-74,Uid:9add0c532e31f906beaad0c3ac14ba82,Namespace:kube-system,Attempt:0,}" Jul 15 05:15:58.709693 containerd[2005]: time="2025-07-15T05:15:58.709513383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-74,Uid:986764d0b15cdc3ee1b144195197daea,Namespace:kube-system,Attempt:0,}" Jul 15 05:15:58.715562 containerd[2005]: time="2025-07-15T05:15:58.715518040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-74,Uid:3381a0186e27ed59188f84c0da40532c,Namespace:kube-system,Attempt:0,}" Jul 15 05:15:58.852428 kubelet[2948]: E0715 05:15:58.852359 2948 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-74?timeout=10s\": dial tcp 172.31.23.74:6443: connect: connection refused" interval="800ms" Jul 15 05:15:58.875666 containerd[2005]: time="2025-07-15T05:15:58.875607774Z" level=info msg="connecting to shim a2a4876b603514bf7710db71c83a7a93c875292ed5f13131361dc07dd6c5b4ef" address="unix:///run/containerd/s/40e15e2d591947e870a2276f6e4a08d87bfba2ea7893c45a8c65453bb17c796a" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:15:58.883139 containerd[2005]: time="2025-07-15T05:15:58.883041520Z" level=info msg="connecting to shim bab6212173080ada15f04f524ef07032fd1e6a814d2abdce64d4f298a0c63e67" address="unix:///run/containerd/s/97addc6be760bf91821424c1dbf001a383c7208a56aed04949c6b64d89794c56" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:15:58.891905 containerd[2005]: time="2025-07-15T05:15:58.890528110Z" level=info msg="connecting to shim 03bfa9e06d2af7a09d9ed9b8114079a83f22c3bb0f62caa81f2a7f5d8d4c6951" address="unix:///run/containerd/s/067d96221bdbdf565b09a2bb8490be3bc17c363675298ef3da2159fb4353c503" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:15:58.996117 
systemd[1]: Started cri-containerd-03bfa9e06d2af7a09d9ed9b8114079a83f22c3bb0f62caa81f2a7f5d8d4c6951.scope - libcontainer container 03bfa9e06d2af7a09d9ed9b8114079a83f22c3bb0f62caa81f2a7f5d8d4c6951. Jul 15 05:15:58.997859 systemd[1]: Started cri-containerd-a2a4876b603514bf7710db71c83a7a93c875292ed5f13131361dc07dd6c5b4ef.scope - libcontainer container a2a4876b603514bf7710db71c83a7a93c875292ed5f13131361dc07dd6c5b4ef. Jul 15 05:15:59.002158 systemd[1]: Started cri-containerd-bab6212173080ada15f04f524ef07032fd1e6a814d2abdce64d4f298a0c63e67.scope - libcontainer container bab6212173080ada15f04f524ef07032fd1e6a814d2abdce64d4f298a0c63e67. Jul 15 05:15:59.058617 kubelet[2948]: I0715 05:15:59.058572 2948 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-74" Jul 15 05:15:59.060100 kubelet[2948]: E0715 05:15:59.060040 2948 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.23.74:6443/api/v1/nodes\": dial tcp 172.31.23.74:6443: connect: connection refused" node="ip-172-31-23-74" Jul 15 05:15:59.068978 kubelet[2948]: W0715 05:15:59.068807 2948 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-74&limit=500&resourceVersion=0": dial tcp 172.31.23.74:6443: connect: connection refused Jul 15 05:15:59.069421 kubelet[2948]: E0715 05:15:59.069182 2948 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.23.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-74&limit=500&resourceVersion=0\": dial tcp 172.31.23.74:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:15:59.116116 containerd[2005]: time="2025-07-15T05:15:59.115994215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-74,Uid:986764d0b15cdc3ee1b144195197daea,Namespace:kube-system,Attempt:0,} returns sandbox id \"bab6212173080ada15f04f524ef07032fd1e6a814d2abdce64d4f298a0c63e67\"" Jul 15 05:15:59.124783 containerd[2005]: time="2025-07-15T05:15:59.124746135Z" level=info msg="CreateContainer within sandbox \"bab6212173080ada15f04f524ef07032fd1e6a814d2abdce64d4f298a0c63e67\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 15 05:15:59.143624 containerd[2005]: time="2025-07-15T05:15:59.143555287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-74,Uid:3381a0186e27ed59188f84c0da40532c,Namespace:kube-system,Attempt:0,} returns sandbox id \"03bfa9e06d2af7a09d9ed9b8114079a83f22c3bb0f62caa81f2a7f5d8d4c6951\"" Jul 15 05:15:59.146259 containerd[2005]: time="2025-07-15T05:15:59.145529454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-74,Uid:9add0c532e31f906beaad0c3ac14ba82,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2a4876b603514bf7710db71c83a7a93c875292ed5f13131361dc07dd6c5b4ef\"" Jul 15 05:15:59.147922 containerd[2005]: time="2025-07-15T05:15:59.147874649Z" level=info msg="CreateContainer within sandbox \"03bfa9e06d2af7a09d9ed9b8114079a83f22c3bb0f62caa81f2a7f5d8d4c6951\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 15 05:15:59.149057 containerd[2005]: time="2025-07-15T05:15:59.149032770Z" level=info msg="CreateContainer within sandbox \"a2a4876b603514bf7710db71c83a7a93c875292ed5f13131361dc07dd6c5b4ef\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 15 05:15:59.167040 containerd[2005]: time="2025-07-15T05:15:59.167003377Z" level=info msg="Container 9e047590c3192e9c8b3874d1fd1d22bdb3891badda438e7e8e6185bc40563d7f: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:15:59.167257 containerd[2005]: time="2025-07-15T05:15:59.167239518Z" level=info msg="Container eae74eacb2c075b73801ea07bcfdb3f8a99e7f9351688512c731318c08ead16c: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:15:59.168298 containerd[2005]: time="2025-07-15T05:15:59.168174810Z" level=info msg="Container 4a7e92e6401ef0c5e2b0c595360610c6600879c9e4e29de87f23b515117b882c: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:15:59.183613 containerd[2005]: time="2025-07-15T05:15:59.183575262Z" level=info msg="CreateContainer within sandbox \"a2a4876b603514bf7710db71c83a7a93c875292ed5f13131361dc07dd6c5b4ef\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9e047590c3192e9c8b3874d1fd1d22bdb3891badda438e7e8e6185bc40563d7f\"" Jul 15 05:15:59.184711 containerd[2005]: time="2025-07-15T05:15:59.184657318Z" level=info msg="StartContainer for \"9e047590c3192e9c8b3874d1fd1d22bdb3891badda438e7e8e6185bc40563d7f\"" Jul 15 05:15:59.185796 containerd[2005]: time="2025-07-15T05:15:59.185700735Z" level=info msg="connecting to shim 9e047590c3192e9c8b3874d1fd1d22bdb3891badda438e7e8e6185bc40563d7f" address="unix:///run/containerd/s/40e15e2d591947e870a2276f6e4a08d87bfba2ea7893c45a8c65453bb17c796a" protocol=ttrpc version=3 Jul 15 05:15:59.189022 containerd[2005]: time="2025-07-15T05:15:59.188993531Z" level=info msg="CreateContainer within sandbox \"03bfa9e06d2af7a09d9ed9b8114079a83f22c3bb0f62caa81f2a7f5d8d4c6951\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"eae74eacb2c075b73801ea07bcfdb3f8a99e7f9351688512c731318c08ead16c\"" Jul 15 05:15:59.190305 containerd[2005]: time="2025-07-15T05:15:59.190258361Z" level=info msg="StartContainer for \"eae74eacb2c075b73801ea07bcfdb3f8a99e7f9351688512c731318c08ead16c\"" Jul 15 05:15:59.191526 containerd[2005]: time="2025-07-15T05:15:59.191491181Z" level=info msg="connecting to shim eae74eacb2c075b73801ea07bcfdb3f8a99e7f9351688512c731318c08ead16c" address="unix:///run/containerd/s/067d96221bdbdf565b09a2bb8490be3bc17c363675298ef3da2159fb4353c503" protocol=ttrpc version=3 Jul 15 05:15:59.192360 containerd[2005]: time="2025-07-15T05:15:59.192048539Z" level=info msg="CreateContainer within sandbox \"bab6212173080ada15f04f524ef07032fd1e6a814d2abdce64d4f298a0c63e67\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4a7e92e6401ef0c5e2b0c595360610c6600879c9e4e29de87f23b515117b882c\"" Jul 15 05:15:59.193071 containerd[2005]: time="2025-07-15T05:15:59.193027522Z" level=info msg="StartContainer for \"4a7e92e6401ef0c5e2b0c595360610c6600879c9e4e29de87f23b515117b882c\"" Jul 15 05:15:59.194003 containerd[2005]: time="2025-07-15T05:15:59.193977955Z" level=info msg="connecting to shim 4a7e92e6401ef0c5e2b0c595360610c6600879c9e4e29de87f23b515117b882c" address="unix:///run/containerd/s/97addc6be760bf91821424c1dbf001a383c7208a56aed04949c6b64d89794c56" protocol=ttrpc version=3 Jul 15 05:15:59.210059 systemd[1]: Started cri-containerd-9e047590c3192e9c8b3874d1fd1d22bdb3891badda438e7e8e6185bc40563d7f.scope - libcontainer container 9e047590c3192e9c8b3874d1fd1d22bdb3891badda438e7e8e6185bc40563d7f. 
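
The sequence above (RunPodSandbox returning a sandbox id, then "connecting to shim", then CreateContainer within that sandbox, then StartContainer) is the CRI RuntimeService contract the kubelet drives over containerd's gRPC socket. A rough observer-side sketch using the published CRI stubs (k8s.io/cri-api) is shown below; it only lists the sandboxes and containers those calls produced rather than re-issuing them, since the full sandbox and container configs come from the static pod manifests.

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// containerd serves the CRI RuntimeService on its main socket; the kubelet
    	// drives RunPodSandbox -> CreateContainer -> StartContainer over this API.
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()
    	rt := runtimeapi.NewRuntimeServiceClient(conn)

    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    	defer cancel()

    	// Sandboxes created for the static control-plane pods above.
    	sandboxes, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, sb := range sandboxes.Items {
    		fmt.Printf("sandbox %s pod=%s/%s state=%v\n", sb.Id, sb.Metadata.Namespace, sb.Metadata.Name, sb.State)
    	}

    	// Containers started inside them ("StartContainer ... returns successfully").
    	containers, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, c := range containers.Containers {
    		fmt.Printf("container %s name=%s state=%v\n", c.Id, c.Metadata.Name, c.State)
    	}
    }
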
Jul 15 05:15:59.221136 systemd[1]: Started cri-containerd-eae74eacb2c075b73801ea07bcfdb3f8a99e7f9351688512c731318c08ead16c.scope - libcontainer container eae74eacb2c075b73801ea07bcfdb3f8a99e7f9351688512c731318c08ead16c. Jul 15 05:15:59.233133 systemd[1]: Started cri-containerd-4a7e92e6401ef0c5e2b0c595360610c6600879c9e4e29de87f23b515117b882c.scope - libcontainer container 4a7e92e6401ef0c5e2b0c595360610c6600879c9e4e29de87f23b515117b882c. Jul 15 05:15:59.263815 kubelet[2948]: W0715 05:15:59.262844 2948 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.74:6443: connect: connection refused Jul 15 05:15:59.263815 kubelet[2948]: E0715 05:15:59.262936 2948 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.23.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.23.74:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:15:59.330753 containerd[2005]: time="2025-07-15T05:15:59.330710968Z" level=info msg="StartContainer for \"9e047590c3192e9c8b3874d1fd1d22bdb3891badda438e7e8e6185bc40563d7f\" returns successfully" Jul 15 05:15:59.355745 containerd[2005]: time="2025-07-15T05:15:59.355692752Z" level=info msg="StartContainer for \"eae74eacb2c075b73801ea07bcfdb3f8a99e7f9351688512c731318c08ead16c\" returns successfully" Jul 15 05:15:59.363356 containerd[2005]: time="2025-07-15T05:15:59.363315050Z" level=info msg="StartContainer for \"4a7e92e6401ef0c5e2b0c595360610c6600879c9e4e29de87f23b515117b882c\" returns successfully" Jul 15 05:15:59.453102 kubelet[2948]: W0715 05:15:59.453027 2948 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.74:6443: connect: connection refused Jul 15 05:15:59.453283 kubelet[2948]: E0715 05:15:59.453113 2948 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.23.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.23.74:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:15:59.561326 kubelet[2948]: W0715 05:15:59.561183 2948 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.23.74:6443: connect: connection refused Jul 15 05:15:59.561326 kubelet[2948]: E0715 05:15:59.561264 2948 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.23.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.23.74:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:15:59.653273 kubelet[2948]: E0715 05:15:59.653218 2948 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-74?timeout=10s\": dial tcp 172.31.23.74:6443: connect: connection refused" interval="1.6s" Jul 15 05:15:59.865626 
kubelet[2948]: I0715 05:15:59.865375 2948 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-74" Jul 15 05:15:59.868210 kubelet[2948]: E0715 05:15:59.865726 2948 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.23.74:6443/api/v1/nodes\": dial tcp 172.31.23.74:6443: connect: connection refused" node="ip-172-31-23-74" Jul 15 05:16:00.349655 kubelet[2948]: E0715 05:16:00.349607 2948 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.23.74:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.23.74:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:16:00.915725 kubelet[2948]: W0715 05:16:00.915646 2948 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-74&limit=500&resourceVersion=0": dial tcp 172.31.23.74:6443: connect: connection refused Jul 15 05:16:00.915725 kubelet[2948]: E0715 05:16:00.915696 2948 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.23.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-74&limit=500&resourceVersion=0\": dial tcp 172.31.23.74:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:16:00.934454 kubelet[2948]: W0715 05:16:00.934405 2948 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.74:6443: connect: connection refused Jul 15 05:16:00.934629 kubelet[2948]: E0715 05:16:00.934457 2948 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.23.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.23.74:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:16:01.254559 kubelet[2948]: E0715 05:16:01.254473 2948 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-74?timeout=10s\": dial tcp 172.31.23.74:6443: connect: connection refused" interval="3.2s" Jul 15 05:16:01.469280 kubelet[2948]: I0715 05:16:01.469242 2948 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-74" Jul 15 05:16:01.469845 kubelet[2948]: E0715 05:16:01.469803 2948 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.23.74:6443/api/v1/nodes\": dial tcp 172.31.23.74:6443: connect: connection refused" node="ip-172-31-23-74" Jul 15 05:16:01.630364 kubelet[2948]: W0715 05:16:01.630243 2948 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.23.74:6443: connect: connection refused Jul 15 05:16:01.630364 kubelet[2948]: E0715 05:16:01.630299 2948 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://172.31.23.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.23.74:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:16:02.612682 kubelet[2948]: W0715 05:16:02.612627 2948 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.74:6443: connect: connection refused Jul 15 05:16:02.613541 kubelet[2948]: E0715 05:16:02.612691 2948 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.23.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.23.74:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:16:04.672784 kubelet[2948]: I0715 05:16:04.672754 2948 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-74" Jul 15 05:16:05.546893 kubelet[2948]: E0715 05:16:05.546831 2948 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-23-74\" not found" node="ip-172-31-23-74" Jul 15 05:16:05.616592 kubelet[2948]: I0715 05:16:05.616259 2948 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-23-74" Jul 15 05:16:05.683579 kubelet[2948]: E0715 05:16:05.683469 2948 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-23-74.185254e77dfa9a66 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-74,UID:ip-172-31-23-74,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-74,},FirstTimestamp:2025-07-15 05:15:58.228298342 +0000 UTC m=+0.494247998,LastTimestamp:2025-07-15 05:15:58.228298342 +0000 UTC m=+0.494247998,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-74,}" Jul 15 05:16:06.216632 kubelet[2948]: I0715 05:16:06.216585 2948 apiserver.go:52] "Watching apiserver" Jul 15 05:16:06.250369 kubelet[2948]: I0715 05:16:06.250326 2948 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 15 05:16:07.686155 systemd[1]: Reload requested from client PID 3214 ('systemctl') (unit session-9.scope)... Jul 15 05:16:07.686464 systemd[1]: Reloading... Jul 15 05:16:07.777910 zram_generator::config[3254]: No configuration found. Jul 15 05:16:07.880815 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 05:16:08.036430 systemd[1]: Reloading finished in 349 ms. Jul 15 05:16:08.064818 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:16:08.086040 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 05:16:08.086274 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:16:08.086337 systemd[1]: kubelet.service: Consumed 940ms CPU time, 126.7M memory peak. Jul 15 05:16:08.088290 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 15 05:16:08.402329 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:16:08.412311 (kubelet)[3318]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 05:16:08.490625 kubelet[3318]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 05:16:08.490625 kubelet[3318]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 15 05:16:08.490625 kubelet[3318]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 05:16:08.493181 kubelet[3318]: I0715 05:16:08.492627 3318 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 05:16:08.500499 kubelet[3318]: I0715 05:16:08.500325 3318 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 15 05:16:08.500499 kubelet[3318]: I0715 05:16:08.500354 3318 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 05:16:08.500744 kubelet[3318]: I0715 05:16:08.500723 3318 server.go:934] "Client rotation is on, will bootstrap in background" Jul 15 05:16:08.502168 kubelet[3318]: I0715 05:16:08.502135 3318 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 15 05:16:08.506491 kubelet[3318]: I0715 05:16:08.506313 3318 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 05:16:08.511381 kubelet[3318]: I0715 05:16:08.511361 3318 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 05:16:08.516906 kubelet[3318]: I0715 05:16:08.515447 3318 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 05:16:08.516906 kubelet[3318]: I0715 05:16:08.515593 3318 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 15 05:16:08.516906 kubelet[3318]: I0715 05:16:08.515727 3318 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 05:16:08.516906 kubelet[3318]: I0715 05:16:08.515753 3318 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-74","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 05:16:08.517249 kubelet[3318]: I0715 05:16:08.516179 3318 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 05:16:08.517249 kubelet[3318]: I0715 05:16:08.516193 3318 container_manager_linux.go:300] "Creating device plugin manager" Jul 15 05:16:08.517249 kubelet[3318]: I0715 05:16:08.516228 3318 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:16:08.517249 kubelet[3318]: I0715 05:16:08.516354 3318 kubelet.go:408] "Attempting to sync node with API server" Jul 15 05:16:08.517249 kubelet[3318]: I0715 05:16:08.516379 3318 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 05:16:08.517249 kubelet[3318]: I0715 05:16:08.516418 3318 kubelet.go:314] "Adding apiserver pod source" Jul 15 05:16:08.517249 kubelet[3318]: I0715 05:16:08.516435 3318 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 05:16:08.521113 kubelet[3318]: I0715 05:16:08.521081 3318 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 15 05:16:08.521616 kubelet[3318]: I0715 05:16:08.521602 3318 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 05:16:08.522061 kubelet[3318]: I0715 05:16:08.522046 3318 server.go:1274] "Started kubelet" Jul 15 05:16:08.523843 kubelet[3318]: I0715 05:16:08.523743 3318 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 05:16:08.535965 kubelet[3318]: I0715 
05:16:08.535933 3318 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 05:16:08.537311 kubelet[3318]: I0715 05:16:08.537284 3318 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 05:16:08.537756 kubelet[3318]: I0715 05:16:08.537656 3318 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 05:16:08.539865 kubelet[3318]: I0715 05:16:08.538016 3318 server.go:449] "Adding debug handlers to kubelet server" Jul 15 05:16:08.545486 kubelet[3318]: I0715 05:16:08.538459 3318 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 05:16:08.545970 kubelet[3318]: E0715 05:16:08.545947 3318 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-23-74\" not found" Jul 15 05:16:08.546108 kubelet[3318]: I0715 05:16:08.539689 3318 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 15 05:16:08.547126 kubelet[3318]: I0715 05:16:08.547099 3318 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 05:16:08.549955 kubelet[3318]: I0715 05:16:08.539661 3318 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 15 05:16:08.549955 kubelet[3318]: I0715 05:16:08.549541 3318 reconciler.go:26] "Reconciler: start to sync state" Jul 15 05:16:08.552229 kubelet[3318]: E0715 05:16:08.551341 3318 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 05:16:08.552229 kubelet[3318]: I0715 05:16:08.551552 3318 factory.go:221] Registration of the containerd container factory successfully Jul 15 05:16:08.552229 kubelet[3318]: I0715 05:16:08.551561 3318 factory.go:221] Registration of the systemd container factory successfully Jul 15 05:16:08.554466 kubelet[3318]: I0715 05:16:08.554352 3318 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 05:16:08.555712 kubelet[3318]: I0715 05:16:08.555696 3318 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 15 05:16:08.556178 kubelet[3318]: I0715 05:16:08.555774 3318 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 15 05:16:08.556178 kubelet[3318]: I0715 05:16:08.555791 3318 kubelet.go:2321] "Starting kubelet main sync loop" Jul 15 05:16:08.556178 kubelet[3318]: E0715 05:16:08.555824 3318 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 05:16:08.613584 kubelet[3318]: I0715 05:16:08.613255 3318 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 15 05:16:08.613584 kubelet[3318]: I0715 05:16:08.613275 3318 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 15 05:16:08.613584 kubelet[3318]: I0715 05:16:08.613300 3318 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:16:08.613584 kubelet[3318]: I0715 05:16:08.613465 3318 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 15 05:16:08.613584 kubelet[3318]: I0715 05:16:08.613476 3318 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 15 05:16:08.613584 kubelet[3318]: I0715 05:16:08.613496 3318 policy_none.go:49] "None policy: Start" Jul 15 05:16:08.614509 kubelet[3318]: I0715 05:16:08.614494 3318 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 15 05:16:08.614633 kubelet[3318]: I0715 05:16:08.614623 3318 state_mem.go:35] "Initializing new in-memory state store" Jul 15 05:16:08.614941 kubelet[3318]: I0715 05:16:08.614931 3318 state_mem.go:75] "Updated machine memory state" Jul 15 05:16:08.620467 kubelet[3318]: I0715 05:16:08.620442 3318 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 05:16:08.620968 kubelet[3318]: I0715 05:16:08.620950 3318 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 05:16:08.622137 kubelet[3318]: I0715 05:16:08.621594 3318 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 05:16:08.622137 kubelet[3318]: I0715 05:16:08.621833 3318 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 05:16:08.725597 kubelet[3318]: I0715 05:16:08.725484 3318 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-74" Jul 15 05:16:08.738285 kubelet[3318]: I0715 05:16:08.738156 3318 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-23-74" Jul 15 05:16:08.738285 kubelet[3318]: I0715 05:16:08.738229 3318 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-23-74" Jul 15 05:16:08.749741 kubelet[3318]: I0715 05:16:08.749710 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/986764d0b15cdc3ee1b144195197daea-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-74\" (UID: \"986764d0b15cdc3ee1b144195197daea\") " pod="kube-system/kube-controller-manager-ip-172-31-23-74" Jul 15 05:16:08.750033 kubelet[3318]: I0715 05:16:08.749917 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/986764d0b15cdc3ee1b144195197daea-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-74\" (UID: \"986764d0b15cdc3ee1b144195197daea\") " pod="kube-system/kube-controller-manager-ip-172-31-23-74" Jul 15 05:16:08.750033 kubelet[3318]: I0715 05:16:08.749974 3318 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/986764d0b15cdc3ee1b144195197daea-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-74\" (UID: \"986764d0b15cdc3ee1b144195197daea\") " pod="kube-system/kube-controller-manager-ip-172-31-23-74" Jul 15 05:16:08.750033 kubelet[3318]: I0715 05:16:08.749995 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3381a0186e27ed59188f84c0da40532c-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-74\" (UID: \"3381a0186e27ed59188f84c0da40532c\") " pod="kube-system/kube-scheduler-ip-172-31-23-74" Jul 15 05:16:08.750033 kubelet[3318]: I0715 05:16:08.750010 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/986764d0b15cdc3ee1b144195197daea-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-74\" (UID: \"986764d0b15cdc3ee1b144195197daea\") " pod="kube-system/kube-controller-manager-ip-172-31-23-74" Jul 15 05:16:08.750263 kubelet[3318]: I0715 05:16:08.750140 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9add0c532e31f906beaad0c3ac14ba82-ca-certs\") pod \"kube-apiserver-ip-172-31-23-74\" (UID: \"9add0c532e31f906beaad0c3ac14ba82\") " pod="kube-system/kube-apiserver-ip-172-31-23-74" Jul 15 05:16:08.750263 kubelet[3318]: I0715 05:16:08.750157 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9add0c532e31f906beaad0c3ac14ba82-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-74\" (UID: \"9add0c532e31f906beaad0c3ac14ba82\") " pod="kube-system/kube-apiserver-ip-172-31-23-74" Jul 15 05:16:08.750420 kubelet[3318]: I0715 05:16:08.750339 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9add0c532e31f906beaad0c3ac14ba82-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-74\" (UID: \"9add0c532e31f906beaad0c3ac14ba82\") " pod="kube-system/kube-apiserver-ip-172-31-23-74" Jul 15 05:16:08.750420 kubelet[3318]: I0715 05:16:08.750361 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/986764d0b15cdc3ee1b144195197daea-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-74\" (UID: \"986764d0b15cdc3ee1b144195197daea\") " pod="kube-system/kube-controller-manager-ip-172-31-23-74" Jul 15 05:16:09.520003 kubelet[3318]: I0715 05:16:09.519956 3318 apiserver.go:52] "Watching apiserver" Jul 15 05:16:09.547027 kubelet[3318]: I0715 05:16:09.546981 3318 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 15 05:16:09.599334 kubelet[3318]: E0715 05:16:09.599277 3318 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-23-74\" already exists" pod="kube-system/kube-apiserver-ip-172-31-23-74" Jul 15 05:16:09.623614 kubelet[3318]: I0715 05:16:09.623556 3318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-23-74" podStartSLOduration=1.623538559 podStartE2EDuration="1.623538559s" 
podCreationTimestamp="2025-07-15 05:16:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:16:09.612586072 +0000 UTC m=+1.174781874" watchObservedRunningTime="2025-07-15 05:16:09.623538559 +0000 UTC m=+1.185734373" Jul 15 05:16:09.634994 kubelet[3318]: I0715 05:16:09.634927 3318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-23-74" podStartSLOduration=1.63487447 podStartE2EDuration="1.63487447s" podCreationTimestamp="2025-07-15 05:16:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:16:09.634812169 +0000 UTC m=+1.197007972" watchObservedRunningTime="2025-07-15 05:16:09.63487447 +0000 UTC m=+1.197070252" Jul 15 05:16:09.635163 kubelet[3318]: I0715 05:16:09.635039 3318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-23-74" podStartSLOduration=1.635033356 podStartE2EDuration="1.635033356s" podCreationTimestamp="2025-07-15 05:16:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:16:09.624009433 +0000 UTC m=+1.186205235" watchObservedRunningTime="2025-07-15 05:16:09.635033356 +0000 UTC m=+1.197229200" Jul 15 05:16:10.383017 update_engine[1984]: I20250715 05:16:10.382922 1984 update_attempter.cc:509] Updating boot flags... Jul 15 05:16:14.031133 kubelet[3318]: I0715 05:16:14.031088 3318 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 15 05:16:14.031512 containerd[2005]: time="2025-07-15T05:16:14.031480162Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 15 05:16:14.031823 kubelet[3318]: I0715 05:16:14.031705 3318 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 15 05:16:14.680346 systemd[1]: Created slice kubepods-besteffort-podf9e6a4d5_918a_4575_b124_6dc6a993f21d.slice - libcontainer container kubepods-besteffort-podf9e6a4d5_918a_4575_b124_6dc6a993f21d.slice. 
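The kubelet entries above record the node's pod CIDR being set to 192.168.0.0/24 and handed to the CRI runtime. As an illustration only (not part of the log), a minimal Python sketch of what that range covers:

    # Illustrative only: inspect the 192.168.0.0/24 pod CIDR logged above.
    import ipaddress

    cidr = ipaddress.ip_network("192.168.0.0/24")
    print("network:", cidr.network_address)      # 192.168.0.0
    print("netmask:", cidr.netmask)              # 255.255.255.0
    print("addresses:", cidr.num_addresses)      # 256 for this node
    # Membership test for a candidate pod IP on this node.
    print(ipaddress.ip_address("192.168.0.17") in cidr)  # True

The 192.168.0.17 address is a made-up example; only the CIDR itself comes from the log.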
Jul 15 05:16:14.691404 kubelet[3318]: I0715 05:16:14.691288 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f9e6a4d5-918a-4575-b124-6dc6a993f21d-kube-proxy\") pod \"kube-proxy-bmvkg\" (UID: \"f9e6a4d5-918a-4575-b124-6dc6a993f21d\") " pod="kube-system/kube-proxy-bmvkg" Jul 15 05:16:14.691404 kubelet[3318]: I0715 05:16:14.691321 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9e6a4d5-918a-4575-b124-6dc6a993f21d-lib-modules\") pod \"kube-proxy-bmvkg\" (UID: \"f9e6a4d5-918a-4575-b124-6dc6a993f21d\") " pod="kube-system/kube-proxy-bmvkg" Jul 15 05:16:14.691404 kubelet[3318]: I0715 05:16:14.691372 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9e6a4d5-918a-4575-b124-6dc6a993f21d-xtables-lock\") pod \"kube-proxy-bmvkg\" (UID: \"f9e6a4d5-918a-4575-b124-6dc6a993f21d\") " pod="kube-system/kube-proxy-bmvkg" Jul 15 05:16:14.691404 kubelet[3318]: I0715 05:16:14.691390 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq9xw\" (UniqueName: \"kubernetes.io/projected/f9e6a4d5-918a-4575-b124-6dc6a993f21d-kube-api-access-dq9xw\") pod \"kube-proxy-bmvkg\" (UID: \"f9e6a4d5-918a-4575-b124-6dc6a993f21d\") " pod="kube-system/kube-proxy-bmvkg" Jul 15 05:16:14.960494 systemd[1]: Created slice kubepods-besteffort-pod62223633_458f_42ba_92a3_2b64daa0e32f.slice - libcontainer container kubepods-besteffort-pod62223633_458f_42ba_92a3_2b64daa0e32f.slice. Jul 15 05:16:14.989646 containerd[2005]: time="2025-07-15T05:16:14.989605330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bmvkg,Uid:f9e6a4d5-918a-4575-b124-6dc6a993f21d,Namespace:kube-system,Attempt:0,}" Jul 15 05:16:14.993540 kubelet[3318]: I0715 05:16:14.993502 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bxpb\" (UniqueName: \"kubernetes.io/projected/62223633-458f-42ba-92a3-2b64daa0e32f-kube-api-access-6bxpb\") pod \"tigera-operator-5bf8dfcb4-5ghjj\" (UID: \"62223633-458f-42ba-92a3-2b64daa0e32f\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-5ghjj" Jul 15 05:16:14.993667 kubelet[3318]: I0715 05:16:14.993559 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/62223633-458f-42ba-92a3-2b64daa0e32f-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-5ghjj\" (UID: \"62223633-458f-42ba-92a3-2b64daa0e32f\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-5ghjj" Jul 15 05:16:15.017134 containerd[2005]: time="2025-07-15T05:16:15.015958540Z" level=info msg="connecting to shim 0fa5a02b319b79923137ffe7df6e0bdb7ab526a4e80a0f9f21752285d9bbc81d" address="unix:///run/containerd/s/3675c3ad8674284225f4eee42747ebc9fa09ffd4dde98e1ccfc3bf09b750b6ca" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:16:15.052117 systemd[1]: Started cri-containerd-0fa5a02b319b79923137ffe7df6e0bdb7ab526a4e80a0f9f21752285d9bbc81d.scope - libcontainer container 0fa5a02b319b79923137ffe7df6e0bdb7ab526a4e80a0f9f21752285d9bbc81d. 
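The containerd entry above shows the kube-proxy sandbox being wired to its shim over a ttrpc unix socket. A small, illustrative Python sketch (field layout assumed from the line as logged) that extracts the shim id and socket address from such a message:

    # Assumes the containerd message layout seen above:
    #   msg="connecting to shim <id>" address="unix:///run/containerd/s/<hash>" ... protocol=ttrpc version=3
    import re

    line = ('time="2025-07-15T05:16:15.015958540Z" level=info '
            'msg="connecting to shim 0fa5a02b319b79923137ffe7df6e0bdb7ab526a4e80a0f9f21752285d9bbc81d" '
            'address="unix:///run/containerd/s/3675c3ad8674284225f4eee42747ebc9fa09ffd4dde98e1ccfc3bf09b750b6ca" '
            'namespace=k8s.io protocol=ttrpc version=3')

    m = re.search(r'msg="connecting to shim (?P<shim>\w+)" address="(?P<addr>unix://[^"]+)"', line)
    if m:
        print("shim id:", m.group("shim"))
        print("socket :", m.group("addr"))

Note that the kube-proxy container started a few entries further down reuses the same shim socket as its sandbox.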
Jul 15 05:16:15.083372 containerd[2005]: time="2025-07-15T05:16:15.083334040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bmvkg,Uid:f9e6a4d5-918a-4575-b124-6dc6a993f21d,Namespace:kube-system,Attempt:0,} returns sandbox id \"0fa5a02b319b79923137ffe7df6e0bdb7ab526a4e80a0f9f21752285d9bbc81d\"" Jul 15 05:16:15.087369 containerd[2005]: time="2025-07-15T05:16:15.087331888Z" level=info msg="CreateContainer within sandbox \"0fa5a02b319b79923137ffe7df6e0bdb7ab526a4e80a0f9f21752285d9bbc81d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 15 05:16:15.133609 containerd[2005]: time="2025-07-15T05:16:15.133566608Z" level=info msg="Container abb37705f5b7e69f5e24b7eb386b887aef796e99da6464c9750e42b53107153c: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:16:15.142056 containerd[2005]: time="2025-07-15T05:16:15.142015538Z" level=info msg="CreateContainer within sandbox \"0fa5a02b319b79923137ffe7df6e0bdb7ab526a4e80a0f9f21752285d9bbc81d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"abb37705f5b7e69f5e24b7eb386b887aef796e99da6464c9750e42b53107153c\"" Jul 15 05:16:15.142547 containerd[2005]: time="2025-07-15T05:16:15.142519898Z" level=info msg="StartContainer for \"abb37705f5b7e69f5e24b7eb386b887aef796e99da6464c9750e42b53107153c\"" Jul 15 05:16:15.144007 containerd[2005]: time="2025-07-15T05:16:15.143950651Z" level=info msg="connecting to shim abb37705f5b7e69f5e24b7eb386b887aef796e99da6464c9750e42b53107153c" address="unix:///run/containerd/s/3675c3ad8674284225f4eee42747ebc9fa09ffd4dde98e1ccfc3bf09b750b6ca" protocol=ttrpc version=3 Jul 15 05:16:15.166151 systemd[1]: Started cri-containerd-abb37705f5b7e69f5e24b7eb386b887aef796e99da6464c9750e42b53107153c.scope - libcontainer container abb37705f5b7e69f5e24b7eb386b887aef796e99da6464c9750e42b53107153c. Jul 15 05:16:15.205602 containerd[2005]: time="2025-07-15T05:16:15.205508363Z" level=info msg="StartContainer for \"abb37705f5b7e69f5e24b7eb386b887aef796e99da6464c9750e42b53107153c\" returns successfully" Jul 15 05:16:15.265665 containerd[2005]: time="2025-07-15T05:16:15.265613684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-5ghjj,Uid:62223633-458f-42ba-92a3-2b64daa0e32f,Namespace:tigera-operator,Attempt:0,}" Jul 15 05:16:15.285968 containerd[2005]: time="2025-07-15T05:16:15.284927911Z" level=info msg="connecting to shim 1c36f497d60968b9a562459d936a4846bab3d0bdbd168d0584c9598ebc4797c4" address="unix:///run/containerd/s/9bbfa931fe9f7b095f5ddb1d738b92786d6852f32ca4595906c6878ecac115c1" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:16:15.313124 systemd[1]: Started cri-containerd-1c36f497d60968b9a562459d936a4846bab3d0bdbd168d0584c9598ebc4797c4.scope - libcontainer container 1c36f497d60968b9a562459d936a4846bab3d0bdbd168d0584c9598ebc4797c4. 
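The entries above carry precise timestamps for the kube-proxy pod's CRI lifecycle (RunPodSandbox, then CreateContainer, then StartContainer), so rough latencies can be read straight off the log. An illustrative Python sketch using two timestamps copied from the lines above:

    # Timestamps copied from the containerd entries above (nanosecond precision).
    from datetime import datetime

    def parse(ts: str) -> datetime:
        # Trim nanoseconds to microseconds so datetime.fromisoformat accepts the value.
        head, frac = ts.rstrip("Z").split(".")
        return datetime.fromisoformat(f"{head}.{frac[:6]}")

    sandbox_ready     = parse("2025-07-15T05:16:15.083334040Z")  # RunPodSandbox returned
    container_running = parse("2025-07-15T05:16:15.205508363Z")  # StartContainer returned

    print(container_running - sandbox_ready)  # about 0.122 s from sandbox ready to kube-proxy running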
Jul 15 05:16:15.359622 containerd[2005]: time="2025-07-15T05:16:15.359566785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-5ghjj,Uid:62223633-458f-42ba-92a3-2b64daa0e32f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1c36f497d60968b9a562459d936a4846bab3d0bdbd168d0584c9598ebc4797c4\"" Jul 15 05:16:15.361849 containerd[2005]: time="2025-07-15T05:16:15.361823837Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 15 05:16:15.622506 kubelet[3318]: I0715 05:16:15.622184 3318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bmvkg" podStartSLOduration=1.6221435770000001 podStartE2EDuration="1.622143577s" podCreationTimestamp="2025-07-15 05:16:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:16:15.621923404 +0000 UTC m=+7.184119206" watchObservedRunningTime="2025-07-15 05:16:15.622143577 +0000 UTC m=+7.184339381" Jul 15 05:16:15.809489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3947227603.mount: Deactivated successfully. Jul 15 05:16:16.897239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount383232022.mount: Deactivated successfully. Jul 15 05:16:19.317763 containerd[2005]: time="2025-07-15T05:16:19.317683592Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:19.318686 containerd[2005]: time="2025-07-15T05:16:19.318636952Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 15 05:16:19.319995 containerd[2005]: time="2025-07-15T05:16:19.319947968Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:19.322061 containerd[2005]: time="2025-07-15T05:16:19.322015210Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:19.322542 containerd[2005]: time="2025-07-15T05:16:19.322513659Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 3.960662043s" Jul 15 05:16:19.322607 containerd[2005]: time="2025-07-15T05:16:19.322547541Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 15 05:16:19.325980 containerd[2005]: time="2025-07-15T05:16:19.325944863Z" level=info msg="CreateContainer within sandbox \"1c36f497d60968b9a562459d936a4846bab3d0bdbd168d0584c9598ebc4797c4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 15 05:16:19.336798 containerd[2005]: time="2025-07-15T05:16:19.336558407Z" level=info msg="Container 0b4d49b4bb71db6408424490c027205c37ab43b1540d4a53564be84303023443: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:16:19.347569 containerd[2005]: time="2025-07-15T05:16:19.347485352Z" level=info msg="CreateContainer within sandbox 
\"1c36f497d60968b9a562459d936a4846bab3d0bdbd168d0584c9598ebc4797c4\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0b4d49b4bb71db6408424490c027205c37ab43b1540d4a53564be84303023443\"" Jul 15 05:16:19.348052 containerd[2005]: time="2025-07-15T05:16:19.348026297Z" level=info msg="StartContainer for \"0b4d49b4bb71db6408424490c027205c37ab43b1540d4a53564be84303023443\"" Jul 15 05:16:19.349222 containerd[2005]: time="2025-07-15T05:16:19.349174803Z" level=info msg="connecting to shim 0b4d49b4bb71db6408424490c027205c37ab43b1540d4a53564be84303023443" address="unix:///run/containerd/s/9bbfa931fe9f7b095f5ddb1d738b92786d6852f32ca4595906c6878ecac115c1" protocol=ttrpc version=3 Jul 15 05:16:19.374183 systemd[1]: Started cri-containerd-0b4d49b4bb71db6408424490c027205c37ab43b1540d4a53564be84303023443.scope - libcontainer container 0b4d49b4bb71db6408424490c027205c37ab43b1540d4a53564be84303023443. Jul 15 05:16:19.417202 containerd[2005]: time="2025-07-15T05:16:19.417158018Z" level=info msg="StartContainer for \"0b4d49b4bb71db6408424490c027205c37ab43b1540d4a53564be84303023443\" returns successfully" Jul 15 05:16:26.183088 sudo[2371]: pam_unix(sudo:session): session closed for user root Jul 15 05:16:26.208560 sshd[2370]: Connection closed by 139.178.89.65 port 41698 Jul 15 05:16:26.209126 sshd-session[2367]: pam_unix(sshd:session): session closed for user core Jul 15 05:16:26.219334 systemd[1]: sshd@8-172.31.23.74:22-139.178.89.65:41698.service: Deactivated successfully. Jul 15 05:16:26.227374 systemd[1]: session-9.scope: Deactivated successfully. Jul 15 05:16:26.227909 systemd[1]: session-9.scope: Consumed 5.111s CPU time, 150.2M memory peak. Jul 15 05:16:26.231104 systemd-logind[1981]: Session 9 logged out. Waiting for processes to exit. Jul 15 05:16:26.237278 systemd-logind[1981]: Removed session 9. Jul 15 05:16:31.180447 kubelet[3318]: I0715 05:16:31.180380 3318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-5ghjj" podStartSLOduration=13.217693257 podStartE2EDuration="17.180359952s" podCreationTimestamp="2025-07-15 05:16:14 +0000 UTC" firstStartedPulling="2025-07-15 05:16:15.360784889 +0000 UTC m=+6.922980670" lastFinishedPulling="2025-07-15 05:16:19.323451582 +0000 UTC m=+10.885647365" observedRunningTime="2025-07-15 05:16:19.631225949 +0000 UTC m=+11.193421751" watchObservedRunningTime="2025-07-15 05:16:31.180359952 +0000 UTC m=+22.742555754" Jul 15 05:16:31.193070 systemd[1]: Created slice kubepods-besteffort-pod5f929daf_ebb1_4526_91c2_015ad217b370.slice - libcontainer container kubepods-besteffort-pod5f929daf_ebb1_4526_91c2_015ad217b370.slice. 
Jul 15 05:16:31.214938 kubelet[3318]: I0715 05:16:31.214900 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4hnb\" (UniqueName: \"kubernetes.io/projected/5f929daf-ebb1-4526-91c2-015ad217b370-kube-api-access-x4hnb\") pod \"calico-typha-759474d88-5l6fj\" (UID: \"5f929daf-ebb1-4526-91c2-015ad217b370\") " pod="calico-system/calico-typha-759474d88-5l6fj" Jul 15 05:16:31.215177 kubelet[3318]: I0715 05:16:31.215146 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f929daf-ebb1-4526-91c2-015ad217b370-tigera-ca-bundle\") pod \"calico-typha-759474d88-5l6fj\" (UID: \"5f929daf-ebb1-4526-91c2-015ad217b370\") " pod="calico-system/calico-typha-759474d88-5l6fj" Jul 15 05:16:31.215358 kubelet[3318]: I0715 05:16:31.215316 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5f929daf-ebb1-4526-91c2-015ad217b370-typha-certs\") pod \"calico-typha-759474d88-5l6fj\" (UID: \"5f929daf-ebb1-4526-91c2-015ad217b370\") " pod="calico-system/calico-typha-759474d88-5l6fj" Jul 15 05:16:31.504925 containerd[2005]: time="2025-07-15T05:16:31.504778262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-759474d88-5l6fj,Uid:5f929daf-ebb1-4526-91c2-015ad217b370,Namespace:calico-system,Attempt:0,}" Jul 15 05:16:31.537959 containerd[2005]: time="2025-07-15T05:16:31.537905002Z" level=info msg="connecting to shim a055825c0272eab1d3613929b709ca608b6309c8ee11b0ae024e674896bfa008" address="unix:///run/containerd/s/bd35dd59b2e346b6abbd6b9cfcaaf175f66537d4c6592d6b6f023eac61be7577" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:16:31.574742 systemd[1]: Created slice kubepods-besteffort-pod49a41d38_ce38_4a45_ab76_2e1e04c2258f.slice - libcontainer container kubepods-besteffort-pod49a41d38_ce38_4a45_ab76_2e1e04c2258f.slice. Jul 15 05:16:31.587811 systemd[1]: Started cri-containerd-a055825c0272eab1d3613929b709ca608b6309c8ee11b0ae024e674896bfa008.scope - libcontainer container a055825c0272eab1d3613929b709ca608b6309c8ee11b0ae024e674896bfa008. 
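The calico-node pod described in the entries that follow mounts flexvol-driver-host, and the kubelet's plugin prober then logs a long run of FlexVolume failures for /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: the driver executable is not found, the call therefore returns empty output, and decoding "" as JSON fails with "unexpected end of JSON input". A minimal, hypothetical Python sketch of that failure mode (it does not reproduce the kubelet's driver-call code):

    # Hypothetical sketch: a FlexVolume driver is expected to answer "init" with a
    # JSON status object, so an empty reply (missing binary) cannot be decoded.
    import json
    import subprocess

    driver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

    try:
        out = subprocess.run([driver, "init"], capture_output=True, text=True).stdout
    except FileNotFoundError:
        out = ""  # mirrors the logged 'executable file not found in $PATH, output: ""'

    try:
        print("driver status:", json.loads(out))
    except json.JSONDecodeError as err:
        print("failed to unmarshal driver output:", err)  # Python's wording differs from Go's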
Jul 15 05:16:31.619168 kubelet[3318]: I0715 05:16:31.619133 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/49a41d38-ce38-4a45-ab76-2e1e04c2258f-flexvol-driver-host\") pod \"calico-node-95nrq\" (UID: \"49a41d38-ce38-4a45-ab76-2e1e04c2258f\") " pod="calico-system/calico-node-95nrq" Jul 15 05:16:31.619553 kubelet[3318]: I0715 05:16:31.619535 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/49a41d38-ce38-4a45-ab76-2e1e04c2258f-var-lib-calico\") pod \"calico-node-95nrq\" (UID: \"49a41d38-ce38-4a45-ab76-2e1e04c2258f\") " pod="calico-system/calico-node-95nrq" Jul 15 05:16:31.619755 kubelet[3318]: I0715 05:16:31.619724 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49a41d38-ce38-4a45-ab76-2e1e04c2258f-lib-modules\") pod \"calico-node-95nrq\" (UID: \"49a41d38-ce38-4a45-ab76-2e1e04c2258f\") " pod="calico-system/calico-node-95nrq" Jul 15 05:16:31.620004 kubelet[3318]: I0715 05:16:31.619928 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/49a41d38-ce38-4a45-ab76-2e1e04c2258f-cni-bin-dir\") pod \"calico-node-95nrq\" (UID: \"49a41d38-ce38-4a45-ab76-2e1e04c2258f\") " pod="calico-system/calico-node-95nrq" Jul 15 05:16:31.620772 kubelet[3318]: I0715 05:16:31.620433 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/49a41d38-ce38-4a45-ab76-2e1e04c2258f-var-run-calico\") pod \"calico-node-95nrq\" (UID: \"49a41d38-ce38-4a45-ab76-2e1e04c2258f\") " pod="calico-system/calico-node-95nrq" Jul 15 05:16:31.620772 kubelet[3318]: I0715 05:16:31.620462 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/49a41d38-ce38-4a45-ab76-2e1e04c2258f-cni-net-dir\") pod \"calico-node-95nrq\" (UID: \"49a41d38-ce38-4a45-ab76-2e1e04c2258f\") " pod="calico-system/calico-node-95nrq" Jul 15 05:16:31.620772 kubelet[3318]: I0715 05:16:31.620478 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49a41d38-ce38-4a45-ab76-2e1e04c2258f-xtables-lock\") pod \"calico-node-95nrq\" (UID: \"49a41d38-ce38-4a45-ab76-2e1e04c2258f\") " pod="calico-system/calico-node-95nrq" Jul 15 05:16:31.620772 kubelet[3318]: I0715 05:16:31.620495 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l26rs\" (UniqueName: \"kubernetes.io/projected/49a41d38-ce38-4a45-ab76-2e1e04c2258f-kube-api-access-l26rs\") pod \"calico-node-95nrq\" (UID: \"49a41d38-ce38-4a45-ab76-2e1e04c2258f\") " pod="calico-system/calico-node-95nrq" Jul 15 05:16:31.620772 kubelet[3318]: I0715 05:16:31.620515 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/49a41d38-ce38-4a45-ab76-2e1e04c2258f-cni-log-dir\") pod \"calico-node-95nrq\" (UID: \"49a41d38-ce38-4a45-ab76-2e1e04c2258f\") " pod="calico-system/calico-node-95nrq" Jul 15 05:16:31.621040 kubelet[3318]: I0715 05:16:31.620531 3318 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/49a41d38-ce38-4a45-ab76-2e1e04c2258f-node-certs\") pod \"calico-node-95nrq\" (UID: \"49a41d38-ce38-4a45-ab76-2e1e04c2258f\") " pod="calico-system/calico-node-95nrq" Jul 15 05:16:31.621040 kubelet[3318]: I0715 05:16:31.620546 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/49a41d38-ce38-4a45-ab76-2e1e04c2258f-policysync\") pod \"calico-node-95nrq\" (UID: \"49a41d38-ce38-4a45-ab76-2e1e04c2258f\") " pod="calico-system/calico-node-95nrq" Jul 15 05:16:31.621040 kubelet[3318]: I0715 05:16:31.620570 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49a41d38-ce38-4a45-ab76-2e1e04c2258f-tigera-ca-bundle\") pod \"calico-node-95nrq\" (UID: \"49a41d38-ce38-4a45-ab76-2e1e04c2258f\") " pod="calico-system/calico-node-95nrq" Jul 15 05:16:31.650953 containerd[2005]: time="2025-07-15T05:16:31.650804984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-759474d88-5l6fj,Uid:5f929daf-ebb1-4526-91c2-015ad217b370,Namespace:calico-system,Attempt:0,} returns sandbox id \"a055825c0272eab1d3613929b709ca608b6309c8ee11b0ae024e674896bfa008\"" Jul 15 05:16:31.654498 containerd[2005]: time="2025-07-15T05:16:31.654461891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 15 05:16:31.838998 kubelet[3318]: E0715 05:16:31.838591 3318 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vdxxc" podUID="3c8ea6c2-6b54-497c-b3b0-486729d54117" Jul 15 05:16:31.884539 containerd[2005]: time="2025-07-15T05:16:31.883959641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-95nrq,Uid:49a41d38-ce38-4a45-ab76-2e1e04c2258f,Namespace:calico-system,Attempt:0,}" Jul 15 05:16:31.908339 kubelet[3318]: E0715 05:16:31.907464 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.908339 kubelet[3318]: W0715 05:16:31.907492 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.908339 kubelet[3318]: E0715 05:16:31.907520 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.908981 kubelet[3318]: E0715 05:16:31.908861 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.913504 kubelet[3318]: W0715 05:16:31.913464 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.913504 kubelet[3318]: E0715 05:16:31.913509 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:31.913940 kubelet[3318]: E0715 05:16:31.913894 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.913940 kubelet[3318]: W0715 05:16:31.913914 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.914121 kubelet[3318]: E0715 05:16:31.913934 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.914547 kubelet[3318]: E0715 05:16:31.914524 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.914547 kubelet[3318]: W0715 05:16:31.914541 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.914547 kubelet[3318]: E0715 05:16:31.914557 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.914830 kubelet[3318]: E0715 05:16:31.914816 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.914830 kubelet[3318]: W0715 05:16:31.914827 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.915051 kubelet[3318]: E0715 05:16:31.914841 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.915158 kubelet[3318]: E0715 05:16:31.915141 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.915233 kubelet[3318]: W0715 05:16:31.915158 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.915233 kubelet[3318]: E0715 05:16:31.915172 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.915414 kubelet[3318]: E0715 05:16:31.915373 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.915414 kubelet[3318]: W0715 05:16:31.915384 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.915414 kubelet[3318]: E0715 05:16:31.915397 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:31.915710 kubelet[3318]: E0715 05:16:31.915691 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.915710 kubelet[3318]: W0715 05:16:31.915706 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.915828 kubelet[3318]: E0715 05:16:31.915722 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.916053 kubelet[3318]: E0715 05:16:31.916035 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.916123 kubelet[3318]: W0715 05:16:31.916050 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.916123 kubelet[3318]: E0715 05:16:31.916088 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.916410 kubelet[3318]: E0715 05:16:31.916395 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.916507 kubelet[3318]: W0715 05:16:31.916410 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.916507 kubelet[3318]: E0715 05:16:31.916425 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.916912 kubelet[3318]: E0715 05:16:31.916825 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.916912 kubelet[3318]: W0715 05:16:31.916843 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.916912 kubelet[3318]: E0715 05:16:31.916862 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.930382 kubelet[3318]: E0715 05:16:31.930343 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.930591 kubelet[3318]: W0715 05:16:31.930568 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.930700 kubelet[3318]: E0715 05:16:31.930685 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:31.931270 kubelet[3318]: E0715 05:16:31.931105 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.931270 kubelet[3318]: W0715 05:16:31.931137 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.931270 kubelet[3318]: E0715 05:16:31.931170 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.937767 kubelet[3318]: E0715 05:16:31.936713 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.937767 kubelet[3318]: W0715 05:16:31.936740 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.937767 kubelet[3318]: E0715 05:16:31.936775 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.941918 kubelet[3318]: E0715 05:16:31.938084 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.941918 kubelet[3318]: W0715 05:16:31.938103 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.941918 kubelet[3318]: E0715 05:16:31.938133 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.941918 kubelet[3318]: E0715 05:16:31.941848 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.941918 kubelet[3318]: W0715 05:16:31.941913 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.942382 kubelet[3318]: E0715 05:16:31.941938 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.944505 kubelet[3318]: E0715 05:16:31.944271 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.944505 kubelet[3318]: W0715 05:16:31.944292 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.944505 kubelet[3318]: E0715 05:16:31.944320 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:31.944931 kubelet[3318]: E0715 05:16:31.944722 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.944931 kubelet[3318]: W0715 05:16:31.944735 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.944931 kubelet[3318]: E0715 05:16:31.944759 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.945191 kubelet[3318]: E0715 05:16:31.945151 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.945191 kubelet[3318]: W0715 05:16:31.945164 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.945191 kubelet[3318]: E0715 05:16:31.945187 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.947644 kubelet[3318]: E0715 05:16:31.947609 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.947644 kubelet[3318]: W0715 05:16:31.947641 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.947815 kubelet[3318]: E0715 05:16:31.947659 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.956121 kubelet[3318]: E0715 05:16:31.956084 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.956121 kubelet[3318]: W0715 05:16:31.956120 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.956310 kubelet[3318]: E0715 05:16:31.956147 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:31.956310 kubelet[3318]: I0715 05:16:31.956216 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frmqm\" (UniqueName: \"kubernetes.io/projected/3c8ea6c2-6b54-497c-b3b0-486729d54117-kube-api-access-frmqm\") pod \"csi-node-driver-vdxxc\" (UID: \"3c8ea6c2-6b54-497c-b3b0-486729d54117\") " pod="calico-system/csi-node-driver-vdxxc" Jul 15 05:16:31.958945 containerd[2005]: time="2025-07-15T05:16:31.957854611Z" level=info msg="connecting to shim c7df9927f86767a64836df93b6a8e038874191355751e1546706095bf17c839c" address="unix:///run/containerd/s/4ad6a2b7440a229d9886ff04de5354bc6f782506f67d270115a82f8cf1a4ffd0" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:16:31.959371 kubelet[3318]: E0715 05:16:31.959338 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.959476 kubelet[3318]: W0715 05:16:31.959372 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.959476 kubelet[3318]: E0715 05:16:31.959403 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.959476 kubelet[3318]: I0715 05:16:31.959442 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3c8ea6c2-6b54-497c-b3b0-486729d54117-socket-dir\") pod \"csi-node-driver-vdxxc\" (UID: \"3c8ea6c2-6b54-497c-b3b0-486729d54117\") " pod="calico-system/csi-node-driver-vdxxc" Jul 15 05:16:31.959996 kubelet[3318]: E0715 05:16:31.959971 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.961004 kubelet[3318]: W0715 05:16:31.959995 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.961004 kubelet[3318]: E0715 05:16:31.960028 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.961004 kubelet[3318]: I0715 05:16:31.960061 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3c8ea6c2-6b54-497c-b3b0-486729d54117-registration-dir\") pod \"csi-node-driver-vdxxc\" (UID: \"3c8ea6c2-6b54-497c-b3b0-486729d54117\") " pod="calico-system/csi-node-driver-vdxxc" Jul 15 05:16:31.961190 kubelet[3318]: E0715 05:16:31.961167 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.961407 kubelet[3318]: W0715 05:16:31.961388 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.961466 kubelet[3318]: E0715 05:16:31.961417 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:31.961775 kubelet[3318]: I0715 05:16:31.961749 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3c8ea6c2-6b54-497c-b3b0-486729d54117-varrun\") pod \"csi-node-driver-vdxxc\" (UID: \"3c8ea6c2-6b54-497c-b3b0-486729d54117\") " pod="calico-system/csi-node-driver-vdxxc" Jul 15 05:16:31.963012 kubelet[3318]: E0715 05:16:31.962990 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.963012 kubelet[3318]: W0715 05:16:31.963012 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.963238 kubelet[3318]: E0715 05:16:31.963035 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.963238 kubelet[3318]: I0715 05:16:31.963066 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c8ea6c2-6b54-497c-b3b0-486729d54117-kubelet-dir\") pod \"csi-node-driver-vdxxc\" (UID: \"3c8ea6c2-6b54-497c-b3b0-486729d54117\") " pod="calico-system/csi-node-driver-vdxxc" Jul 15 05:16:31.965815 kubelet[3318]: E0715 05:16:31.965339 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.965815 kubelet[3318]: W0715 05:16:31.965362 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.965815 kubelet[3318]: E0715 05:16:31.965400 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.965815 kubelet[3318]: E0715 05:16:31.965665 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.965815 kubelet[3318]: W0715 05:16:31.965678 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.965815 kubelet[3318]: E0715 05:16:31.965694 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.967148 kubelet[3318]: E0715 05:16:31.967051 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.967148 kubelet[3318]: W0715 05:16:31.967069 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.967148 kubelet[3318]: E0715 05:16:31.967095 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:31.967917 kubelet[3318]: E0715 05:16:31.967831 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.967917 kubelet[3318]: W0715 05:16:31.967846 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.968787 kubelet[3318]: E0715 05:16:31.968712 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.969248 kubelet[3318]: E0715 05:16:31.969230 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.969431 kubelet[3318]: W0715 05:16:31.969249 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.969431 kubelet[3318]: E0715 05:16:31.969275 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.970405 kubelet[3318]: E0715 05:16:31.970368 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.970405 kubelet[3318]: W0715 05:16:31.970390 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.970510 kubelet[3318]: E0715 05:16:31.970416 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.971619 kubelet[3318]: E0715 05:16:31.971143 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.971619 kubelet[3318]: W0715 05:16:31.971170 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.971619 kubelet[3318]: E0715 05:16:31.971188 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.972108 kubelet[3318]: E0715 05:16:31.972089 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.972189 kubelet[3318]: W0715 05:16:31.972115 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.972189 kubelet[3318]: E0715 05:16:31.972132 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:31.980572 kubelet[3318]: E0715 05:16:31.980173 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.980572 kubelet[3318]: W0715 05:16:31.980226 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.980572 kubelet[3318]: E0715 05:16:31.980258 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.981459 kubelet[3318]: E0715 05:16:31.981046 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.981713 kubelet[3318]: W0715 05:16:31.981560 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.981713 kubelet[3318]: E0715 05:16:31.981590 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:32.020472 systemd[1]: Started cri-containerd-c7df9927f86767a64836df93b6a8e038874191355751e1546706095bf17c839c.scope - libcontainer container c7df9927f86767a64836df93b6a8e038874191355751e1546706095bf17c839c. Jul 15 05:16:32.068728 kubelet[3318]: E0715 05:16:32.068534 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:32.068728 kubelet[3318]: W0715 05:16:32.068560 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:32.068728 kubelet[3318]: E0715 05:16:32.068587 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:32.069135 kubelet[3318]: E0715 05:16:32.069119 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:32.069603 kubelet[3318]: W0715 05:16:32.069334 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:32.069603 kubelet[3318]: E0715 05:16:32.069361 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:32.070846 kubelet[3318]: E0715 05:16:32.070429 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:32.070846 kubelet[3318]: W0715 05:16:32.070447 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:32.070846 kubelet[3318]: E0715 05:16:32.070465 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:32.071640 kubelet[3318]: E0715 05:16:32.071544 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:32.072193 kubelet[3318]: W0715 05:16:32.072037 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:32.072590 kubelet[3318]: E0715 05:16:32.072372 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:32.073481 kubelet[3318]: E0715 05:16:32.073344 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:32.073481 kubelet[3318]: W0715 05:16:32.073359 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:32.073947 kubelet[3318]: E0715 05:16:32.073916 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:32.075011 kubelet[3318]: E0715 05:16:32.074976 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:32.075011 kubelet[3318]: W0715 05:16:32.074993 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:32.075596 kubelet[3318]: E0715 05:16:32.075561 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:32.075939 kubelet[3318]: E0715 05:16:32.075925 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:32.076516 kubelet[3318]: W0715 05:16:32.076018 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:32.076924 kubelet[3318]: E0715 05:16:32.076906 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:32.077816 kubelet[3318]: E0715 05:16:32.077780 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:32.077816 kubelet[3318]: W0715 05:16:32.077797 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:32.078245 kubelet[3318]: E0715 05:16:32.078064 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:32.078684 kubelet[3318]: E0715 05:16:32.078633 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:32.078684 kubelet[3318]: W0715 05:16:32.078648 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:32.079289 kubelet[3318]: E0715 05:16:32.079173 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:32.080111 kubelet[3318]: E0715 05:16:32.080077 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:32.080111 kubelet[3318]: W0715 05:16:32.080092 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:32.081114 kubelet[3318]: E0715 05:16:32.080995 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:32.081385 kubelet[3318]: E0715 05:16:32.081353 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:32.081385 kubelet[3318]: W0715 05:16:32.081368 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:32.081939 kubelet[3318]: E0715 05:16:32.081918 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:32.083047 kubelet[3318]: E0715 05:16:32.083013 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:32.083047 kubelet[3318]: W0715 05:16:32.083029 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:32.083401 kubelet[3318]: E0715 05:16:32.083377 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:32.083562 kubelet[3318]: E0715 05:16:32.083520 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:32.083562 kubelet[3318]: W0715 05:16:32.083533 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:32.084040 kubelet[3318]: E0715 05:16:32.083917 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:32.084841 kubelet[3318]: E0715 05:16:32.084746 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:32.084841 kubelet[3318]: W0715 05:16:32.084814 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:32.085450 kubelet[3318]: E0715 05:16:32.085434 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:32.086322 kubelet[3318]: E0715 05:16:32.086110 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:32.086822 kubelet[3318]: W0715 05:16:32.086373 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:32.086822 kubelet[3318]: E0715 05:16:32.086628 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:32.087281 kubelet[3318]: E0715 05:16:32.087266 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:32.087480 kubelet[3318]: W0715 05:16:32.087377 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:32.087641 kubelet[3318]: E0715 05:16:32.087624 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:32.088054 kubelet[3318]: E0715 05:16:32.088023 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:32.088054 kubelet[3318]: W0715 05:16:32.088037 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:32.088615 kubelet[3318]: E0715 05:16:32.088447 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:32.088781 kubelet[3318]: E0715 05:16:32.088752 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:32.088928 kubelet[3318]: W0715 05:16:32.088865 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:32.089772 kubelet[3318]: E0715 05:16:32.089083 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:32.091025 kubelet[3318]: E0715 05:16:32.090148 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:32.091025 kubelet[3318]: W0715 05:16:32.090177 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:32.091025 kubelet[3318]: E0715 05:16:32.090618 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:32.091794 kubelet[3318]: E0715 05:16:32.091496 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:32.091794 kubelet[3318]: W0715 05:16:32.091583 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:32.092448 kubelet[3318]: E0715 05:16:32.092375 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:32.092980 kubelet[3318]: E0715 05:16:32.092964 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:32.093328 kubelet[3318]: W0715 05:16:32.093163 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:32.093850 kubelet[3318]: E0715 05:16:32.093831 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:32.094602 kubelet[3318]: E0715 05:16:32.094534 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:32.094951 kubelet[3318]: W0715 05:16:32.094860 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:32.098083 kubelet[3318]: E0715 05:16:32.096946 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:32.098083 kubelet[3318]: E0715 05:16:32.097372 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:32.098083 kubelet[3318]: W0715 05:16:32.097388 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:32.098083 kubelet[3318]: E0715 05:16:32.097640 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:32.098083 kubelet[3318]: W0715 05:16:32.097650 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:32.098083 kubelet[3318]: E0715 05:16:32.097665 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:32.098083 kubelet[3318]: E0715 05:16:32.097695 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:32.103015 kubelet[3318]: E0715 05:16:32.102979 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:32.103015 kubelet[3318]: W0715 05:16:32.103007 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:32.103211 kubelet[3318]: E0715 05:16:32.103032 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:32.120600 kubelet[3318]: E0715 05:16:32.120298 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:32.120600 kubelet[3318]: W0715 05:16:32.120332 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:32.120600 kubelet[3318]: E0715 05:16:32.120369 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:32.136478 containerd[2005]: time="2025-07-15T05:16:32.136217667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-95nrq,Uid:49a41d38-ce38-4a45-ab76-2e1e04c2258f,Namespace:calico-system,Attempt:0,} returns sandbox id \"c7df9927f86767a64836df93b6a8e038874191355751e1546706095bf17c839c\"" Jul 15 05:16:33.145559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1801558165.mount: Deactivated successfully. 
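Note on the repeated kubelet messages above: the driver-call.go:262, driver-call.go:149 and plugins.go:691 entries are all produced by the kubelet's FlexVolume plugin probing. The kubelet scans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, finds the nodeagent~uds directory, and tries to run its uds driver with the init argument; because that executable is not present, the call returns empty output, and unmarshalling the empty string fails with "unexpected end of JSON input". As a hedged sketch only (assuming the standard FlexVolume calling convention, and not the actual nodeagent~uds driver), a driver answering the init probe would be expected to print a small JSON status object on stdout, roughly like this Python stub:

#!/usr/bin/env python3
# Illustrative FlexVolume driver stub (hypothetical, not the real nodeagent~uds binary).
# Assumption: the kubelet invokes the driver as "<driver> init" and expects a JSON
# object with at least a "status" field on stdout.
import json
import sys

def main() -> int:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # "attach": False advertises that this driver has no separate attach/detach phase.
        print(json.dumps({"status": "Success",
                          "capabilities": {"attach": False}}))
        return 0
    # Any verb this stub does not implement is reported as unsupported.
    print(json.dumps({"status": "Not supported",
                      "message": f"operation {op!r} not implemented"}))
    return 1

if __name__ == "__main__":
    sys.exit(main())

If an executable along these lines were installed at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, the init probe would at least return parseable JSON; in this log the binary is simply absent, so the kubelet keeps re-probing and logging the same three-line sequence.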
Jul 15 05:16:33.560726 kubelet[3318]: E0715 05:16:33.558978 3318 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vdxxc" podUID="3c8ea6c2-6b54-497c-b3b0-486729d54117" Jul 15 05:16:34.390945 containerd[2005]: time="2025-07-15T05:16:34.390862170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:34.394495 containerd[2005]: time="2025-07-15T05:16:34.394362830Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 15 05:16:34.399164 containerd[2005]: time="2025-07-15T05:16:34.399122552Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:34.405228 containerd[2005]: time="2025-07-15T05:16:34.405144946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:34.405979 containerd[2005]: time="2025-07-15T05:16:34.405566973Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.75091515s" Jul 15 05:16:34.405979 containerd[2005]: time="2025-07-15T05:16:34.405617349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 15 05:16:34.407843 containerd[2005]: time="2025-07-15T05:16:34.406754302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 15 05:16:34.421505 containerd[2005]: time="2025-07-15T05:16:34.421466847Z" level=info msg="CreateContainer within sandbox \"a055825c0272eab1d3613929b709ca608b6309c8ee11b0ae024e674896bfa008\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 15 05:16:34.437785 containerd[2005]: time="2025-07-15T05:16:34.437015481Z" level=info msg="Container 9d4ff1db8206b08067db22e872514ec9bd2bf32216b19b76d6f46082eb80f61e: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:16:34.452441 containerd[2005]: time="2025-07-15T05:16:34.452273486Z" level=info msg="CreateContainer within sandbox \"a055825c0272eab1d3613929b709ca608b6309c8ee11b0ae024e674896bfa008\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9d4ff1db8206b08067db22e872514ec9bd2bf32216b19b76d6f46082eb80f61e\"" Jul 15 05:16:34.454233 containerd[2005]: time="2025-07-15T05:16:34.453625406Z" level=info msg="StartContainer for \"9d4ff1db8206b08067db22e872514ec9bd2bf32216b19b76d6f46082eb80f61e\"" Jul 15 05:16:34.455731 containerd[2005]: time="2025-07-15T05:16:34.455681702Z" level=info msg="connecting to shim 9d4ff1db8206b08067db22e872514ec9bd2bf32216b19b76d6f46082eb80f61e" address="unix:///run/containerd/s/bd35dd59b2e346b6abbd6b9cfcaaf175f66537d4c6592d6b6f023eac61be7577" protocol=ttrpc version=3 Jul 15 05:16:34.484110 systemd[1]: Started 
cri-containerd-9d4ff1db8206b08067db22e872514ec9bd2bf32216b19b76d6f46082eb80f61e.scope - libcontainer container 9d4ff1db8206b08067db22e872514ec9bd2bf32216b19b76d6f46082eb80f61e. Jul 15 05:16:34.558774 containerd[2005]: time="2025-07-15T05:16:34.558735132Z" level=info msg="StartContainer for \"9d4ff1db8206b08067db22e872514ec9bd2bf32216b19b76d6f46082eb80f61e\" returns successfully" Jul 15 05:16:34.775918 kubelet[3318]: E0715 05:16:34.775535 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:34.775918 kubelet[3318]: W0715 05:16:34.775575 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:34.775918 kubelet[3318]: E0715 05:16:34.775605 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:34.777286 kubelet[3318]: E0715 05:16:34.777182 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:34.777286 kubelet[3318]: W0715 05:16:34.777203 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:34.777286 kubelet[3318]: E0715 05:16:34.777226 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:34.779161 kubelet[3318]: E0715 05:16:34.779056 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:34.779161 kubelet[3318]: W0715 05:16:34.779076 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:34.779161 kubelet[3318]: E0715 05:16:34.779097 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:34.779639 kubelet[3318]: E0715 05:16:34.779543 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:34.779639 kubelet[3318]: W0715 05:16:34.779557 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:34.779639 kubelet[3318]: E0715 05:16:34.779573 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:34.780058 kubelet[3318]: E0715 05:16:34.779986 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:34.780058 kubelet[3318]: W0715 05:16:34.779999 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:34.780058 kubelet[3318]: E0715 05:16:34.780014 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:34.780543 kubelet[3318]: E0715 05:16:34.780431 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:34.780543 kubelet[3318]: W0715 05:16:34.780447 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:34.780543 kubelet[3318]: E0715 05:16:34.780461 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:34.780996 kubelet[3318]: E0715 05:16:34.780916 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:34.780996 kubelet[3318]: W0715 05:16:34.780930 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:34.780996 kubelet[3318]: E0715 05:16:34.780944 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:34.781907 kubelet[3318]: E0715 05:16:34.781838 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:34.781907 kubelet[3318]: W0715 05:16:34.781852 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:34.781907 kubelet[3318]: E0715 05:16:34.781866 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:34.782376 kubelet[3318]: E0715 05:16:34.782301 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:34.782376 kubelet[3318]: W0715 05:16:34.782315 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:34.782376 kubelet[3318]: E0715 05:16:34.782330 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:34.782764 kubelet[3318]: E0715 05:16:34.782689 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:34.782764 kubelet[3318]: W0715 05:16:34.782703 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:34.782764 kubelet[3318]: E0715 05:16:34.782719 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:34.783231 kubelet[3318]: E0715 05:16:34.783218 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:34.783453 kubelet[3318]: W0715 05:16:34.783298 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:34.783453 kubelet[3318]: E0715 05:16:34.783316 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:34.784002 kubelet[3318]: E0715 05:16:34.783871 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:34.784002 kubelet[3318]: W0715 05:16:34.783913 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:34.784002 kubelet[3318]: E0715 05:16:34.783929 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:34.784716 kubelet[3318]: E0715 05:16:34.784531 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:34.784716 kubelet[3318]: W0715 05:16:34.784547 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:34.784716 kubelet[3318]: E0715 05:16:34.784562 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:34.785141 kubelet[3318]: E0715 05:16:34.785128 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:34.785921 kubelet[3318]: W0715 05:16:34.785207 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:34.785921 kubelet[3318]: E0715 05:16:34.785227 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:34.786252 kubelet[3318]: E0715 05:16:34.786238 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:34.786406 kubelet[3318]: W0715 05:16:34.786328 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:34.786406 kubelet[3318]: E0715 05:16:34.786347 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:34.798905 kubelet[3318]: E0715 05:16:34.798830 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:34.798905 kubelet[3318]: W0715 05:16:34.798857 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:34.799276 kubelet[3318]: E0715 05:16:34.799111 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:34.800023 kubelet[3318]: E0715 05:16:34.799980 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:34.800023 kubelet[3318]: W0715 05:16:34.799999 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:34.800483 kubelet[3318]: E0715 05:16:34.800264 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:34.800681 kubelet[3318]: E0715 05:16:34.800648 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:34.800681 kubelet[3318]: W0715 05:16:34.800663 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:34.800874 kubelet[3318]: E0715 05:16:34.800797 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:34.801710 kubelet[3318]: E0715 05:16:34.801161 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:34.801710 kubelet[3318]: W0715 05:16:34.801176 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:34.802126 kubelet[3318]: E0715 05:16:34.801867 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:34.802327 kubelet[3318]: E0715 05:16:34.802102 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:34.802327 kubelet[3318]: W0715 05:16:34.802234 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:34.802327 kubelet[3318]: E0715 05:16:34.802253 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:34.802801 kubelet[3318]: E0715 05:16:34.802767 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:34.802801 kubelet[3318]: W0715 05:16:34.802783 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:34.803104 kubelet[3318]: E0715 05:16:34.803022 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:34.803299 kubelet[3318]: E0715 05:16:34.803244 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:34.803299 kubelet[3318]: W0715 05:16:34.803255 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:34.804241 kubelet[3318]: E0715 05:16:34.804205 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:34.804406 kubelet[3318]: E0715 05:16:34.804377 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:34.804406 kubelet[3318]: W0715 05:16:34.804391 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:34.804613 kubelet[3318]: E0715 05:16:34.804580 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:34.804906 kubelet[3318]: E0715 05:16:34.804860 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:34.805025 kubelet[3318]: W0715 05:16:34.804874 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:34.805181 kubelet[3318]: E0715 05:16:34.805100 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:34.805320 kubelet[3318]: E0715 05:16:34.805310 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:34.805397 kubelet[3318]: W0715 05:16:34.805384 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:34.805874 kubelet[3318]: E0715 05:16:34.805825 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:34.806186 kubelet[3318]: E0715 05:16:34.806154 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:34.806186 kubelet[3318]: W0715 05:16:34.806169 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:34.806380 kubelet[3318]: E0715 05:16:34.806351 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:34.806925 kubelet[3318]: E0715 05:16:34.806799 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:34.806925 kubelet[3318]: W0715 05:16:34.806814 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:34.807623 kubelet[3318]: E0715 05:16:34.807403 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:34.807950 kubelet[3318]: E0715 05:16:34.807936 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:34.808041 kubelet[3318]: W0715 05:16:34.808029 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:34.808287 kubelet[3318]: E0715 05:16:34.808272 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:34.808697 kubelet[3318]: E0715 05:16:34.808605 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:34.808697 kubelet[3318]: W0715 05:16:34.808625 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:34.808811 kubelet[3318]: E0715 05:16:34.808708 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:34.809996 kubelet[3318]: E0715 05:16:34.809950 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:34.809996 kubelet[3318]: W0715 05:16:34.809972 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:34.810653 kubelet[3318]: E0715 05:16:34.810163 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:34.810653 kubelet[3318]: E0715 05:16:34.810254 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:34.810653 kubelet[3318]: W0715 05:16:34.810264 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:34.810653 kubelet[3318]: E0715 05:16:34.810345 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:34.811654 kubelet[3318]: E0715 05:16:34.811632 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:34.811654 kubelet[3318]: W0715 05:16:34.811654 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:34.811774 kubelet[3318]: E0715 05:16:34.811674 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:34.812653 kubelet[3318]: E0715 05:16:34.811915 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:34.812653 kubelet[3318]: W0715 05:16:34.811929 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:34.812653 kubelet[3318]: E0715 05:16:34.811943 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:35.558428 kubelet[3318]: E0715 05:16:35.558357 3318 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vdxxc" podUID="3c8ea6c2-6b54-497c-b3b0-486729d54117" Jul 15 05:16:35.692693 kubelet[3318]: I0715 05:16:35.692664 3318 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 05:16:35.792765 kubelet[3318]: E0715 05:16:35.792720 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:35.792765 kubelet[3318]: W0715 05:16:35.792745 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:35.793292 kubelet[3318]: E0715 05:16:35.792783 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:35.793363 kubelet[3318]: E0715 05:16:35.793334 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:35.793363 kubelet[3318]: W0715 05:16:35.793347 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:35.793363 kubelet[3318]: E0715 05:16:35.793360 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:35.793682 kubelet[3318]: E0715 05:16:35.793658 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:35.793923 kubelet[3318]: W0715 05:16:35.793791 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:35.793923 kubelet[3318]: E0715 05:16:35.793813 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:35.794101 kubelet[3318]: E0715 05:16:35.794086 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:35.794139 kubelet[3318]: W0715 05:16:35.794108 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:35.794139 kubelet[3318]: E0715 05:16:35.794135 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:35.794352 kubelet[3318]: E0715 05:16:35.794338 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:35.794352 kubelet[3318]: W0715 05:16:35.794351 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:35.794417 kubelet[3318]: E0715 05:16:35.794360 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:35.794616 kubelet[3318]: E0715 05:16:35.794601 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:35.794654 kubelet[3318]: W0715 05:16:35.794630 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:35.794654 kubelet[3318]: E0715 05:16:35.794641 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:35.794839 kubelet[3318]: E0715 05:16:35.794814 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:35.794839 kubelet[3318]: W0715 05:16:35.794825 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:35.794913 kubelet[3318]: E0715 05:16:35.794846 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:35.795077 kubelet[3318]: E0715 05:16:35.795055 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:35.795077 kubelet[3318]: W0715 05:16:35.795067 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:35.795077 kubelet[3318]: E0715 05:16:35.795076 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:35.795259 kubelet[3318]: E0715 05:16:35.795234 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:35.795259 kubelet[3318]: W0715 05:16:35.795250 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:35.795372 kubelet[3318]: E0715 05:16:35.795266 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:35.795904 kubelet[3318]: E0715 05:16:35.795549 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:35.795904 kubelet[3318]: W0715 05:16:35.795560 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:35.795904 kubelet[3318]: E0715 05:16:35.795570 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:35.795904 kubelet[3318]: E0715 05:16:35.795818 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:35.795904 kubelet[3318]: W0715 05:16:35.795826 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:35.795904 kubelet[3318]: E0715 05:16:35.795835 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:35.796086 kubelet[3318]: E0715 05:16:35.796041 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:35.796086 kubelet[3318]: W0715 05:16:35.796049 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:35.796086 kubelet[3318]: E0715 05:16:35.796059 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:35.796362 kubelet[3318]: E0715 05:16:35.796345 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:35.796362 kubelet[3318]: W0715 05:16:35.796360 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:35.796440 kubelet[3318]: E0715 05:16:35.796369 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:35.796527 kubelet[3318]: E0715 05:16:35.796514 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:35.796527 kubelet[3318]: W0715 05:16:35.796525 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:35.796592 kubelet[3318]: E0715 05:16:35.796532 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:35.796709 kubelet[3318]: E0715 05:16:35.796696 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:35.796738 kubelet[3318]: W0715 05:16:35.796723 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:35.796738 kubelet[3318]: E0715 05:16:35.796732 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:35.809843 kubelet[3318]: E0715 05:16:35.808456 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:35.809843 kubelet[3318]: W0715 05:16:35.808480 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:35.809843 kubelet[3318]: E0715 05:16:35.808500 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:35.809843 kubelet[3318]: E0715 05:16:35.808729 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:35.809843 kubelet[3318]: W0715 05:16:35.808745 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:35.809843 kubelet[3318]: E0715 05:16:35.808765 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:35.809843 kubelet[3318]: E0715 05:16:35.809060 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:35.809843 kubelet[3318]: W0715 05:16:35.809070 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:35.809843 kubelet[3318]: E0715 05:16:35.809098 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:35.809843 kubelet[3318]: E0715 05:16:35.809299 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:35.810342 kubelet[3318]: W0715 05:16:35.809314 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:35.810342 kubelet[3318]: E0715 05:16:35.809333 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:35.810342 kubelet[3318]: E0715 05:16:35.809557 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:35.810342 kubelet[3318]: W0715 05:16:35.809570 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:35.810342 kubelet[3318]: E0715 05:16:35.809590 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:35.810809 kubelet[3318]: E0715 05:16:35.809800 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:35.810809 kubelet[3318]: W0715 05:16:35.810799 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:35.811016 kubelet[3318]: E0715 05:16:35.810866 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:35.811049 kubelet[3318]: E0715 05:16:35.811027 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:35.811049 kubelet[3318]: W0715 05:16:35.811035 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:35.811131 kubelet[3318]: E0715 05:16:35.811113 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:35.811231 kubelet[3318]: E0715 05:16:35.811218 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:35.811231 kubelet[3318]: W0715 05:16:35.811227 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:35.811385 kubelet[3318]: E0715 05:16:35.811321 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:35.811430 kubelet[3318]: E0715 05:16:35.811420 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:35.811459 kubelet[3318]: W0715 05:16:35.811431 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:35.811459 kubelet[3318]: E0715 05:16:35.811449 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:35.812089 kubelet[3318]: E0715 05:16:35.811974 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:35.812089 kubelet[3318]: W0715 05:16:35.811987 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:35.812089 kubelet[3318]: E0715 05:16:35.812004 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:35.812596 kubelet[3318]: E0715 05:16:35.812493 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:35.812596 kubelet[3318]: W0715 05:16:35.812504 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:35.812596 kubelet[3318]: E0715 05:16:35.812548 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:35.813164 kubelet[3318]: E0715 05:16:35.812788 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:35.813164 kubelet[3318]: W0715 05:16:35.812801 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:35.813264 kubelet[3318]: E0715 05:16:35.813199 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:35.813264 kubelet[3318]: W0715 05:16:35.813207 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:35.813264 kubelet[3318]: E0715 05:16:35.813217 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:35.813378 kubelet[3318]: E0715 05:16:35.813353 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:35.814018 kubelet[3318]: E0715 05:16:35.813989 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:35.814018 kubelet[3318]: W0715 05:16:35.814003 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:35.814123 kubelet[3318]: E0715 05:16:35.814025 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:35.814355 kubelet[3318]: E0715 05:16:35.814334 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:35.814355 kubelet[3318]: W0715 05:16:35.814349 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:35.814524 kubelet[3318]: E0715 05:16:35.814366 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:35.814584 kubelet[3318]: E0715 05:16:35.814563 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:35.814584 kubelet[3318]: W0715 05:16:35.814570 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:35.814725 kubelet[3318]: E0715 05:16:35.814604 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:35.815104 kubelet[3318]: E0715 05:16:35.815067 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:35.815242 kubelet[3318]: W0715 05:16:35.815083 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:35.815242 kubelet[3318]: E0715 05:16:35.815200 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:35.815424 kubelet[3318]: E0715 05:16:35.815385 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:35.815424 kubelet[3318]: W0715 05:16:35.815402 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:35.815424 kubelet[3318]: E0715 05:16:35.815412 3318 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:36.132345 containerd[2005]: time="2025-07-15T05:16:36.132109301Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:36.141379 containerd[2005]: time="2025-07-15T05:16:36.141336381Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 15 05:16:36.150906 containerd[2005]: time="2025-07-15T05:16:36.149961045Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:36.159605 containerd[2005]: time="2025-07-15T05:16:36.159565321Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:36.160141 containerd[2005]: time="2025-07-15T05:16:36.160105035Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.753323704s" Jul 15 05:16:36.160141 containerd[2005]: time="2025-07-15T05:16:36.160142193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 15 05:16:36.169279 containerd[2005]: time="2025-07-15T05:16:36.169244280Z" level=info msg="CreateContainer within sandbox \"c7df9927f86767a64836df93b6a8e038874191355751e1546706095bf17c839c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 15 05:16:36.200892 containerd[2005]: time="2025-07-15T05:16:36.200831863Z" level=info msg="Container 9106f0655553a80f2bd1621cca3060b8c8e53dd0b57ec304a8472c6d9659f419: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:16:36.210359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1622065143.mount: Deactivated successfully. Jul 15 05:16:36.223708 containerd[2005]: time="2025-07-15T05:16:36.223658394Z" level=info msg="CreateContainer within sandbox \"c7df9927f86767a64836df93b6a8e038874191355751e1546706095bf17c839c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9106f0655553a80f2bd1621cca3060b8c8e53dd0b57ec304a8472c6d9659f419\"" Jul 15 05:16:36.224331 containerd[2005]: time="2025-07-15T05:16:36.224268412Z" level=info msg="StartContainer for \"9106f0655553a80f2bd1621cca3060b8c8e53dd0b57ec304a8472c6d9659f419\"" Jul 15 05:16:36.225716 containerd[2005]: time="2025-07-15T05:16:36.225680577Z" level=info msg="connecting to shim 9106f0655553a80f2bd1621cca3060b8c8e53dd0b57ec304a8472c6d9659f419" address="unix:///run/containerd/s/4ad6a2b7440a229d9886ff04de5354bc6f782506f67d270115a82f8cf1a4ffd0" protocol=ttrpc version=3 Jul 15 05:16:36.252090 systemd[1]: Started cri-containerd-9106f0655553a80f2bd1621cca3060b8c8e53dd0b57ec304a8472c6d9659f419.scope - libcontainer container 9106f0655553a80f2bd1621cca3060b8c8e53dd0b57ec304a8472c6d9659f419. 
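The flood of driver-call failures above is the kubelet's FlexVolume prober repeatedly exercising the nodeagent~uds plugin directory before the uds binary exists; the errors stop once the flexvol-driver init container started in the previous entry (from the freshly pulled pod2daemon-flexvol image) puts that binary in place. A minimal Go sketch of the two error strings involved, using only the standard library rather than the kubelet's driver-call code:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        // A driver binary that has not been installed yet cannot be resolved;
        // the lookup fails with "executable file not found in $PATH", the same
        // wording the kubelet warnings above report for the uds driver.
        if _, err := exec.LookPath("uds"); err != nil {
            fmt.Println("driver lookup:", err)
        }

        // The failed call produces no output at all, and decoding an empty
        // byte slice as JSON yields "unexpected end of JSON input", which is
        // the error driver-call.go logs for the init command.
        var status map[string]interface{}
        if err := json.Unmarshal([]byte(""), &status); err != nil {
            fmt.Println("unmarshal:", err)
        }
    }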
Jul 15 05:16:36.309067 containerd[2005]: time="2025-07-15T05:16:36.309029247Z" level=info msg="StartContainer for \"9106f0655553a80f2bd1621cca3060b8c8e53dd0b57ec304a8472c6d9659f419\" returns successfully" Jul 15 05:16:36.317061 systemd[1]: cri-containerd-9106f0655553a80f2bd1621cca3060b8c8e53dd0b57ec304a8472c6d9659f419.scope: Deactivated successfully. Jul 15 05:16:36.331441 containerd[2005]: time="2025-07-15T05:16:36.331377946Z" level=info msg="received exit event container_id:\"9106f0655553a80f2bd1621cca3060b8c8e53dd0b57ec304a8472c6d9659f419\" id:\"9106f0655553a80f2bd1621cca3060b8c8e53dd0b57ec304a8472c6d9659f419\" pid:4217 exited_at:{seconds:1752556596 nanos:320352805}" Jul 15 05:16:36.345638 containerd[2005]: time="2025-07-15T05:16:36.345590726Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9106f0655553a80f2bd1621cca3060b8c8e53dd0b57ec304a8472c6d9659f419\" id:\"9106f0655553a80f2bd1621cca3060b8c8e53dd0b57ec304a8472c6d9659f419\" pid:4217 exited_at:{seconds:1752556596 nanos:320352805}" Jul 15 05:16:36.368406 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9106f0655553a80f2bd1621cca3060b8c8e53dd0b57ec304a8472c6d9659f419-rootfs.mount: Deactivated successfully. Jul 15 05:16:36.697046 containerd[2005]: time="2025-07-15T05:16:36.697010730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 15 05:16:36.718632 kubelet[3318]: I0715 05:16:36.718119 3318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-759474d88-5l6fj" podStartSLOduration=2.965410531 podStartE2EDuration="5.718102487s" podCreationTimestamp="2025-07-15 05:16:31 +0000 UTC" firstStartedPulling="2025-07-15 05:16:31.65374093 +0000 UTC m=+23.215936713" lastFinishedPulling="2025-07-15 05:16:34.406432886 +0000 UTC m=+25.968628669" observedRunningTime="2025-07-15 05:16:34.719537487 +0000 UTC m=+26.281733290" watchObservedRunningTime="2025-07-15 05:16:36.718102487 +0000 UTC m=+28.280298292" Jul 15 05:16:37.556498 kubelet[3318]: E0715 05:16:37.556452 3318 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vdxxc" podUID="3c8ea6c2-6b54-497c-b3b0-486729d54117" Jul 15 05:16:39.558997 kubelet[3318]: E0715 05:16:39.557417 3318 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vdxxc" podUID="3c8ea6c2-6b54-497c-b3b0-486729d54117" Jul 15 05:16:39.948651 containerd[2005]: time="2025-07-15T05:16:39.948533768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:39.949662 containerd[2005]: time="2025-07-15T05:16:39.949605129Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 15 05:16:39.953041 containerd[2005]: time="2025-07-15T05:16:39.952963415Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:39.953988 containerd[2005]: time="2025-07-15T05:16:39.953196517Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with 
image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.256151074s" Jul 15 05:16:39.953988 containerd[2005]: time="2025-07-15T05:16:39.953225364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 15 05:16:39.953988 containerd[2005]: time="2025-07-15T05:16:39.953602753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:39.955898 containerd[2005]: time="2025-07-15T05:16:39.955861076Z" level=info msg="CreateContainer within sandbox \"c7df9927f86767a64836df93b6a8e038874191355751e1546706095bf17c839c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 15 05:16:39.969117 containerd[2005]: time="2025-07-15T05:16:39.969081501Z" level=info msg="Container cf897ea9ad442e9830768ae716886483b20822e22c7f8230a0aa9c3e340f96ff: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:16:39.985078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3429033999.mount: Deactivated successfully. Jul 15 05:16:40.001502 containerd[2005]: time="2025-07-15T05:16:40.001420503Z" level=info msg="CreateContainer within sandbox \"c7df9927f86767a64836df93b6a8e038874191355751e1546706095bf17c839c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cf897ea9ad442e9830768ae716886483b20822e22c7f8230a0aa9c3e340f96ff\"" Jul 15 05:16:40.002178 containerd[2005]: time="2025-07-15T05:16:40.002024894Z" level=info msg="StartContainer for \"cf897ea9ad442e9830768ae716886483b20822e22c7f8230a0aa9c3e340f96ff\"" Jul 15 05:16:40.003686 containerd[2005]: time="2025-07-15T05:16:40.003649321Z" level=info msg="connecting to shim cf897ea9ad442e9830768ae716886483b20822e22c7f8230a0aa9c3e340f96ff" address="unix:///run/containerd/s/4ad6a2b7440a229d9886ff04de5354bc6f782506f67d270115a82f8cf1a4ffd0" protocol=ttrpc version=3 Jul 15 05:16:40.032090 systemd[1]: Started cri-containerd-cf897ea9ad442e9830768ae716886483b20822e22c7f8230a0aa9c3e340f96ff.scope - libcontainer container cf897ea9ad442e9830768ae716886483b20822e22c7f8230a0aa9c3e340f96ff. Jul 15 05:16:40.114439 containerd[2005]: time="2025-07-15T05:16:40.114395327Z" level=info msg="StartContainer for \"cf897ea9ad442e9830768ae716886483b20822e22c7f8230a0aa9c3e340f96ff\" returns successfully" Jul 15 05:16:40.954685 systemd[1]: cri-containerd-cf897ea9ad442e9830768ae716886483b20822e22c7f8230a0aa9c3e340f96ff.scope: Deactivated successfully. Jul 15 05:16:40.954957 systemd[1]: cri-containerd-cf897ea9ad442e9830768ae716886483b20822e22c7f8230a0aa9c3e340f96ff.scope: Consumed 552ms CPU time, 164.6M memory peak, 6M read from disk, 171.2M written to disk. 
Jul 15 05:16:41.035721 containerd[2005]: time="2025-07-15T05:16:41.035608789Z" level=info msg="received exit event container_id:\"cf897ea9ad442e9830768ae716886483b20822e22c7f8230a0aa9c3e340f96ff\" id:\"cf897ea9ad442e9830768ae716886483b20822e22c7f8230a0aa9c3e340f96ff\" pid:4276 exited_at:{seconds:1752556601 nanos:35171915}" Jul 15 05:16:41.036675 containerd[2005]: time="2025-07-15T05:16:41.036623631Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cf897ea9ad442e9830768ae716886483b20822e22c7f8230a0aa9c3e340f96ff\" id:\"cf897ea9ad442e9830768ae716886483b20822e22c7f8230a0aa9c3e340f96ff\" pid:4276 exited_at:{seconds:1752556601 nanos:35171915}" Jul 15 05:16:41.051067 kubelet[3318]: I0715 05:16:41.051030 3318 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 15 05:16:41.092667 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf897ea9ad442e9830768ae716886483b20822e22c7f8230a0aa9c3e340f96ff-rootfs.mount: Deactivated successfully. Jul 15 05:16:41.156105 kubelet[3318]: I0715 05:16:41.155806 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r4v6\" (UniqueName: \"kubernetes.io/projected/a11110d2-6068-4e9c-9e8e-a93325afe40c-kube-api-access-4r4v6\") pod \"calico-kube-controllers-6656f64997-pdnp6\" (UID: \"a11110d2-6068-4e9c-9e8e-a93325afe40c\") " pod="calico-system/calico-kube-controllers-6656f64997-pdnp6" Jul 15 05:16:41.157008 kubelet[3318]: I0715 05:16:41.155869 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5988605f-13f7-444f-ac86-a1779962cf15-config-volume\") pod \"coredns-7c65d6cfc9-xbgj9\" (UID: \"5988605f-13f7-444f-ac86-a1779962cf15\") " pod="kube-system/coredns-7c65d6cfc9-xbgj9" Jul 15 05:16:41.158185 systemd[1]: Created slice kubepods-burstable-pod5988605f_13f7_444f_ac86_a1779962cf15.slice - libcontainer container kubepods-burstable-pod5988605f_13f7_444f_ac86_a1779962cf15.slice. 
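The "received exit event" and TaskExit entries above carry the container's termination time as a protobuf timestamp split into seconds and nanos since the Unix epoch. A small standard-library sketch (not containerd's own event types) converts that pair back into the RFC 3339 form the surrounding containerd messages use:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // exited_at fields copied from the TaskExit event for the install-cni
        // container above.
        const seconds, nanos = 1752556601, 35171915

        exitedAt := time.Unix(seconds, nanos).UTC()
        fmt.Println(exitedAt.Format(time.RFC3339Nano))
        // 2025-07-15T05:16:41.035171915Z, a few hundred microseconds before
        // containerd logged the event at 05:16:41.035608789.
    }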
Jul 15 05:16:41.161793 kubelet[3318]: I0715 05:16:41.156865 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e71f6908-5e30-4a56-99d8-319af24d49f8-calico-apiserver-certs\") pod \"calico-apiserver-d8ff57bfb-4dgwh\" (UID: \"e71f6908-5e30-4a56-99d8-319af24d49f8\") " pod="calico-apiserver/calico-apiserver-d8ff57bfb-4dgwh" Jul 15 05:16:41.161793 kubelet[3318]: I0715 05:16:41.158253 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djrtn\" (UniqueName: \"kubernetes.io/projected/5988605f-13f7-444f-ac86-a1779962cf15-kube-api-access-djrtn\") pod \"coredns-7c65d6cfc9-xbgj9\" (UID: \"5988605f-13f7-444f-ac86-a1779962cf15\") " pod="kube-system/coredns-7c65d6cfc9-xbgj9" Jul 15 05:16:41.161793 kubelet[3318]: I0715 05:16:41.158285 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp64v\" (UniqueName: \"kubernetes.io/projected/e71f6908-5e30-4a56-99d8-319af24d49f8-kube-api-access-tp64v\") pod \"calico-apiserver-d8ff57bfb-4dgwh\" (UID: \"e71f6908-5e30-4a56-99d8-319af24d49f8\") " pod="calico-apiserver/calico-apiserver-d8ff57bfb-4dgwh" Jul 15 05:16:41.161793 kubelet[3318]: I0715 05:16:41.158308 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a11110d2-6068-4e9c-9e8e-a93325afe40c-tigera-ca-bundle\") pod \"calico-kube-controllers-6656f64997-pdnp6\" (UID: \"a11110d2-6068-4e9c-9e8e-a93325afe40c\") " pod="calico-system/calico-kube-controllers-6656f64997-pdnp6" Jul 15 05:16:41.176151 systemd[1]: Created slice kubepods-besteffort-poda11110d2_6068_4e9c_9e8e_a93325afe40c.slice - libcontainer container kubepods-besteffort-poda11110d2_6068_4e9c_9e8e_a93325afe40c.slice. Jul 15 05:16:41.188634 systemd[1]: Created slice kubepods-besteffort-pode71f6908_5e30_4a56_99d8_319af24d49f8.slice - libcontainer container kubepods-besteffort-pode71f6908_5e30_4a56_99d8_319af24d49f8.slice. Jul 15 05:16:41.195863 systemd[1]: Created slice kubepods-besteffort-podfa1bb245_af3c_4846_a1c8_e5bcfe83ea18.slice - libcontainer container kubepods-besteffort-podfa1bb245_af3c_4846_a1c8_e5bcfe83ea18.slice. Jul 15 05:16:41.209036 systemd[1]: Created slice kubepods-burstable-pod6aac3249_80fb_4b1f_a28c_3ac8c29919fc.slice - libcontainer container kubepods-burstable-pod6aac3249_80fb_4b1f_a28c_3ac8c29919fc.slice. Jul 15 05:16:41.221658 systemd[1]: Created slice kubepods-besteffort-pod281753dc_9760_4189_82b0_00b06864b158.slice - libcontainer container kubepods-besteffort-pod281753dc_9760_4189_82b0_00b06864b158.slice. Jul 15 05:16:41.231515 systemd[1]: Created slice kubepods-besteffort-podd33fee1b_fbb1_45ad_a062_d6ec68236181.slice - libcontainer container kubepods-besteffort-podd33fee1b_fbb1_45ad_a062_d6ec68236181.slice. 
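Each pod admitted above gets a transient systemd slice whose name encodes its QoS class and UID, with the UID's dashes escaped to underscores because systemd reserves "-" as a hierarchy separator in unit names. A sketch of that naming pattern as it appears in the "Created slice" entries here (a systemd cgroup driver is assumed, which the unit names imply; this mirrors the logged pattern, not kubelet source):

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceName reproduces the naming pattern visible in the "Created slice"
    // entries above: kubepods-<qos>-pod<uid with dashes replaced>.slice.
    func sliceName(qosClass, podUID string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice",
            qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        fmt.Println(sliceName("burstable", "5988605f-13f7-444f-ac86-a1779962cf15"))
        // kubepods-burstable-pod5988605f_13f7_444f_ac86_a1779962cf15.slice
        fmt.Println(sliceName("besteffort", "d33fee1b-fbb1-45ad-a062-d6ec68236181"))
        // kubepods-besteffort-podd33fee1b_fbb1_45ad_a062_d6ec68236181.slice
    }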
Jul 15 05:16:41.259227 kubelet[3318]: I0715 05:16:41.259182 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/281753dc-9760-4189-82b0-00b06864b158-config\") pod \"goldmane-58fd7646b9-w62jd\" (UID: \"281753dc-9760-4189-82b0-00b06864b158\") " pod="calico-system/goldmane-58fd7646b9-w62jd" Jul 15 05:16:41.259227 kubelet[3318]: I0715 05:16:41.259219 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d33fee1b-fbb1-45ad-a062-d6ec68236181-whisker-backend-key-pair\") pod \"whisker-6fd76b7578-w5hzv\" (UID: \"d33fee1b-fbb1-45ad-a062-d6ec68236181\") " pod="calico-system/whisker-6fd76b7578-w5hzv" Jul 15 05:16:41.259393 kubelet[3318]: I0715 05:16:41.259237 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6aac3249-80fb-4b1f-a28c-3ac8c29919fc-config-volume\") pod \"coredns-7c65d6cfc9-76jhj\" (UID: \"6aac3249-80fb-4b1f-a28c-3ac8c29919fc\") " pod="kube-system/coredns-7c65d6cfc9-76jhj" Jul 15 05:16:41.259393 kubelet[3318]: I0715 05:16:41.259259 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fa1bb245-af3c-4846-a1c8-e5bcfe83ea18-calico-apiserver-certs\") pod \"calico-apiserver-d8ff57bfb-v9955\" (UID: \"fa1bb245-af3c-4846-a1c8-e5bcfe83ea18\") " pod="calico-apiserver/calico-apiserver-d8ff57bfb-v9955" Jul 15 05:16:41.259393 kubelet[3318]: I0715 05:16:41.259276 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/281753dc-9760-4189-82b0-00b06864b158-goldmane-key-pair\") pod \"goldmane-58fd7646b9-w62jd\" (UID: \"281753dc-9760-4189-82b0-00b06864b158\") " pod="calico-system/goldmane-58fd7646b9-w62jd" Jul 15 05:16:41.259393 kubelet[3318]: I0715 05:16:41.259293 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qndz9\" (UniqueName: \"kubernetes.io/projected/d33fee1b-fbb1-45ad-a062-d6ec68236181-kube-api-access-qndz9\") pod \"whisker-6fd76b7578-w5hzv\" (UID: \"d33fee1b-fbb1-45ad-a062-d6ec68236181\") " pod="calico-system/whisker-6fd76b7578-w5hzv" Jul 15 05:16:41.259393 kubelet[3318]: I0715 05:16:41.259309 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/281753dc-9760-4189-82b0-00b06864b158-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-w62jd\" (UID: \"281753dc-9760-4189-82b0-00b06864b158\") " pod="calico-system/goldmane-58fd7646b9-w62jd" Jul 15 05:16:41.259535 kubelet[3318]: I0715 05:16:41.259324 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d33fee1b-fbb1-45ad-a062-d6ec68236181-whisker-ca-bundle\") pod \"whisker-6fd76b7578-w5hzv\" (UID: \"d33fee1b-fbb1-45ad-a062-d6ec68236181\") " pod="calico-system/whisker-6fd76b7578-w5hzv" Jul 15 05:16:41.259535 kubelet[3318]: I0715 05:16:41.259348 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4pmw\" (UniqueName: 
\"kubernetes.io/projected/6aac3249-80fb-4b1f-a28c-3ac8c29919fc-kube-api-access-q4pmw\") pod \"coredns-7c65d6cfc9-76jhj\" (UID: \"6aac3249-80fb-4b1f-a28c-3ac8c29919fc\") " pod="kube-system/coredns-7c65d6cfc9-76jhj" Jul 15 05:16:41.259535 kubelet[3318]: I0715 05:16:41.259406 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh4zv\" (UniqueName: \"kubernetes.io/projected/281753dc-9760-4189-82b0-00b06864b158-kube-api-access-bh4zv\") pod \"goldmane-58fd7646b9-w62jd\" (UID: \"281753dc-9760-4189-82b0-00b06864b158\") " pod="calico-system/goldmane-58fd7646b9-w62jd" Jul 15 05:16:41.259535 kubelet[3318]: I0715 05:16:41.259421 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr692\" (UniqueName: \"kubernetes.io/projected/fa1bb245-af3c-4846-a1c8-e5bcfe83ea18-kube-api-access-cr692\") pod \"calico-apiserver-d8ff57bfb-v9955\" (UID: \"fa1bb245-af3c-4846-a1c8-e5bcfe83ea18\") " pod="calico-apiserver/calico-apiserver-d8ff57bfb-v9955" Jul 15 05:16:41.484555 containerd[2005]: time="2025-07-15T05:16:41.484443197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6656f64997-pdnp6,Uid:a11110d2-6068-4e9c-9e8e-a93325afe40c,Namespace:calico-system,Attempt:0,}" Jul 15 05:16:41.487702 containerd[2005]: time="2025-07-15T05:16:41.487666118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xbgj9,Uid:5988605f-13f7-444f-ac86-a1779962cf15,Namespace:kube-system,Attempt:0,}" Jul 15 05:16:41.494851 containerd[2005]: time="2025-07-15T05:16:41.494741599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8ff57bfb-4dgwh,Uid:e71f6908-5e30-4a56-99d8-319af24d49f8,Namespace:calico-apiserver,Attempt:0,}" Jul 15 05:16:41.507612 containerd[2005]: time="2025-07-15T05:16:41.507379665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8ff57bfb-v9955,Uid:fa1bb245-af3c-4846-a1c8-e5bcfe83ea18,Namespace:calico-apiserver,Attempt:0,}" Jul 15 05:16:41.528268 containerd[2005]: time="2025-07-15T05:16:41.528234938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-76jhj,Uid:6aac3249-80fb-4b1f-a28c-3ac8c29919fc,Namespace:kube-system,Attempt:0,}" Jul 15 05:16:41.545099 containerd[2005]: time="2025-07-15T05:16:41.545063370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fd76b7578-w5hzv,Uid:d33fee1b-fbb1-45ad-a062-d6ec68236181,Namespace:calico-system,Attempt:0,}" Jul 15 05:16:41.545385 containerd[2005]: time="2025-07-15T05:16:41.545367955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-w62jd,Uid:281753dc-9760-4189-82b0-00b06864b158,Namespace:calico-system,Attempt:0,}" Jul 15 05:16:41.563512 systemd[1]: Created slice kubepods-besteffort-pod3c8ea6c2_6b54_497c_b3b0_486729d54117.slice - libcontainer container kubepods-besteffort-pod3c8ea6c2_6b54_497c_b3b0_486729d54117.slice. 
Jul 15 05:16:41.568827 containerd[2005]: time="2025-07-15T05:16:41.568782634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vdxxc,Uid:3c8ea6c2-6b54-497c-b3b0-486729d54117,Namespace:calico-system,Attempt:0,}" Jul 15 05:16:41.752715 containerd[2005]: time="2025-07-15T05:16:41.752668893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 15 05:16:42.476899 containerd[2005]: time="2025-07-15T05:16:42.472478762Z" level=error msg="Failed to destroy network for sandbox \"6efa60476b7cc2a35de48f5bc36e704e12d9fddaf1b78d6a910f2868ad54f45e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:42.481933 containerd[2005]: time="2025-07-15T05:16:42.478116171Z" level=error msg="Failed to destroy network for sandbox \"1180f4b124a3fb191b79114ccfe9b62621bb45b26adf5490bfeeffa471565816\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:42.481514 systemd[1]: run-netns-cni\x2dc03f4309\x2ddfa6\x2d4b40\x2dcab4\x2dd9eba562ae77.mount: Deactivated successfully. Jul 15 05:16:42.492779 systemd[1]: run-netns-cni\x2d207c0fa8\x2d28ed\x2debda\x2deacc\x2d2ec3660e53ff.mount: Deactivated successfully. Jul 15 05:16:42.498042 containerd[2005]: time="2025-07-15T05:16:42.497961500Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8ff57bfb-v9955,Uid:fa1bb245-af3c-4846-a1c8-e5bcfe83ea18,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6efa60476b7cc2a35de48f5bc36e704e12d9fddaf1b78d6a910f2868ad54f45e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:42.500464 kubelet[3318]: E0715 05:16:42.500306 3318 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6efa60476b7cc2a35de48f5bc36e704e12d9fddaf1b78d6a910f2868ad54f45e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:42.501265 kubelet[3318]: E0715 05:16:42.500740 3318 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6efa60476b7cc2a35de48f5bc36e704e12d9fddaf1b78d6a910f2868ad54f45e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d8ff57bfb-v9955" Jul 15 05:16:42.501265 kubelet[3318]: E0715 05:16:42.500775 3318 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6efa60476b7cc2a35de48f5bc36e704e12d9fddaf1b78d6a910f2868ad54f45e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d8ff57bfb-v9955" Jul 15 05:16:42.501265 kubelet[3318]: E0715 05:16:42.500840 3318 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d8ff57bfb-v9955_calico-apiserver(fa1bb245-af3c-4846-a1c8-e5bcfe83ea18)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d8ff57bfb-v9955_calico-apiserver(fa1bb245-af3c-4846-a1c8-e5bcfe83ea18)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6efa60476b7cc2a35de48f5bc36e704e12d9fddaf1b78d6a910f2868ad54f45e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d8ff57bfb-v9955" podUID="fa1bb245-af3c-4846-a1c8-e5bcfe83ea18" Jul 15 05:16:42.514981 containerd[2005]: time="2025-07-15T05:16:42.512641790Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-76jhj,Uid:6aac3249-80fb-4b1f-a28c-3ac8c29919fc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1180f4b124a3fb191b79114ccfe9b62621bb45b26adf5490bfeeffa471565816\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:42.529988 kubelet[3318]: E0715 05:16:42.527802 3318 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1180f4b124a3fb191b79114ccfe9b62621bb45b26adf5490bfeeffa471565816\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:42.529988 kubelet[3318]: E0715 05:16:42.527898 3318 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1180f4b124a3fb191b79114ccfe9b62621bb45b26adf5490bfeeffa471565816\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-76jhj" Jul 15 05:16:42.529988 kubelet[3318]: E0715 05:16:42.527928 3318 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1180f4b124a3fb191b79114ccfe9b62621bb45b26adf5490bfeeffa471565816\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-76jhj" Jul 15 05:16:42.530241 kubelet[3318]: E0715 05:16:42.527985 3318 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-76jhj_kube-system(6aac3249-80fb-4b1f-a28c-3ac8c29919fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-76jhj_kube-system(6aac3249-80fb-4b1f-a28c-3ac8c29919fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1180f4b124a3fb191b79114ccfe9b62621bb45b26adf5490bfeeffa471565816\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-76jhj" 
podUID="6aac3249-80fb-4b1f-a28c-3ac8c29919fc" Jul 15 05:16:42.547444 containerd[2005]: time="2025-07-15T05:16:42.547370328Z" level=error msg="Failed to destroy network for sandbox \"378bc91268e9c116182c39c9a5967979a7214805c949160c5a4c777de37c46a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:42.552902 containerd[2005]: time="2025-07-15T05:16:42.550457563Z" level=error msg="Failed to destroy network for sandbox \"b794c21b0ec267d5b17ea4dd1767c4cdc4983891bf7577b9292c1b4340ebbfa9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:42.555785 systemd[1]: run-netns-cni\x2d87a34a37\x2d5fb2\x2def7e\x2d9fb6\x2d98f9862e5444.mount: Deactivated successfully. Jul 15 05:16:42.563823 systemd[1]: run-netns-cni\x2db5b3169f\x2d587d\x2dc1b7\x2dac7e\x2d2fd0d8973673.mount: Deactivated successfully. Jul 15 05:16:42.566613 containerd[2005]: time="2025-07-15T05:16:42.559089147Z" level=error msg="Failed to destroy network for sandbox \"ac42003c85ed0c250773e173f1d494915c6d7d42ce504d2eb56c671c5813ec63\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:42.566852 containerd[2005]: time="2025-07-15T05:16:42.566119772Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fd76b7578-w5hzv,Uid:d33fee1b-fbb1-45ad-a062-d6ec68236181,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"378bc91268e9c116182c39c9a5967979a7214805c949160c5a4c777de37c46a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:42.567482 kubelet[3318]: E0715 05:16:42.567356 3318 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"378bc91268e9c116182c39c9a5967979a7214805c949160c5a4c777de37c46a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:42.567657 kubelet[3318]: E0715 05:16:42.567636 3318 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"378bc91268e9c116182c39c9a5967979a7214805c949160c5a4c777de37c46a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6fd76b7578-w5hzv" Jul 15 05:16:42.567862 kubelet[3318]: E0715 05:16:42.567830 3318 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"378bc91268e9c116182c39c9a5967979a7214805c949160c5a4c777de37c46a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6fd76b7578-w5hzv" Jul 15 05:16:42.569863 kubelet[3318]: E0715 05:16:42.567915 
3318 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6fd76b7578-w5hzv_calico-system(d33fee1b-fbb1-45ad-a062-d6ec68236181)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6fd76b7578-w5hzv_calico-system(d33fee1b-fbb1-45ad-a062-d6ec68236181)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"378bc91268e9c116182c39c9a5967979a7214805c949160c5a4c777de37c46a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6fd76b7578-w5hzv" podUID="d33fee1b-fbb1-45ad-a062-d6ec68236181" Jul 15 05:16:42.570351 containerd[2005]: time="2025-07-15T05:16:42.568938947Z" level=error msg="Failed to destroy network for sandbox \"b8fb20b2c4524cca0ad278299536dbae86289f142df97dc419177e29f393412d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:42.572972 containerd[2005]: time="2025-07-15T05:16:42.572715537Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6656f64997-pdnp6,Uid:a11110d2-6068-4e9c-9e8e-a93325afe40c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b794c21b0ec267d5b17ea4dd1767c4cdc4983891bf7577b9292c1b4340ebbfa9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:42.573853 kubelet[3318]: E0715 05:16:42.573679 3318 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b794c21b0ec267d5b17ea4dd1767c4cdc4983891bf7577b9292c1b4340ebbfa9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:42.573853 kubelet[3318]: E0715 05:16:42.573739 3318 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b794c21b0ec267d5b17ea4dd1767c4cdc4983891bf7577b9292c1b4340ebbfa9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6656f64997-pdnp6" Jul 15 05:16:42.573853 kubelet[3318]: E0715 05:16:42.573765 3318 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b794c21b0ec267d5b17ea4dd1767c4cdc4983891bf7577b9292c1b4340ebbfa9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6656f64997-pdnp6" Jul 15 05:16:42.574790 kubelet[3318]: E0715 05:16:42.573816 3318 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6656f64997-pdnp6_calico-system(a11110d2-6068-4e9c-9e8e-a93325afe40c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-6656f64997-pdnp6_calico-system(a11110d2-6068-4e9c-9e8e-a93325afe40c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b794c21b0ec267d5b17ea4dd1767c4cdc4983891bf7577b9292c1b4340ebbfa9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6656f64997-pdnp6" podUID="a11110d2-6068-4e9c-9e8e-a93325afe40c" Jul 15 05:16:42.580198 containerd[2005]: time="2025-07-15T05:16:42.580025528Z" level=error msg="Failed to destroy network for sandbox \"0269dfd57bf2b67d6814777118656219777dbeb3aa0a6d303b27c7ef0ca2c2d0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:42.581788 containerd[2005]: time="2025-07-15T05:16:42.581727967Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8ff57bfb-4dgwh,Uid:e71f6908-5e30-4a56-99d8-319af24d49f8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac42003c85ed0c250773e173f1d494915c6d7d42ce504d2eb56c671c5813ec63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:42.582415 kubelet[3318]: E0715 05:16:42.582382 3318 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac42003c85ed0c250773e173f1d494915c6d7d42ce504d2eb56c671c5813ec63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:42.582600 kubelet[3318]: E0715 05:16:42.582557 3318 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac42003c85ed0c250773e173f1d494915c6d7d42ce504d2eb56c671c5813ec63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d8ff57bfb-4dgwh" Jul 15 05:16:42.582831 kubelet[3318]: E0715 05:16:42.582749 3318 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac42003c85ed0c250773e173f1d494915c6d7d42ce504d2eb56c671c5813ec63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d8ff57bfb-4dgwh" Jul 15 05:16:42.583151 kubelet[3318]: E0715 05:16:42.582937 3318 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d8ff57bfb-4dgwh_calico-apiserver(e71f6908-5e30-4a56-99d8-319af24d49f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d8ff57bfb-4dgwh_calico-apiserver(e71f6908-5e30-4a56-99d8-319af24d49f8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac42003c85ed0c250773e173f1d494915c6d7d42ce504d2eb56c671c5813ec63\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d8ff57bfb-4dgwh" podUID="e71f6908-5e30-4a56-99d8-319af24d49f8" Jul 15 05:16:42.583601 containerd[2005]: time="2025-07-15T05:16:42.583011002Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-w62jd,Uid:281753dc-9760-4189-82b0-00b06864b158,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8fb20b2c4524cca0ad278299536dbae86289f142df97dc419177e29f393412d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:42.583684 kubelet[3318]: E0715 05:16:42.583267 3318 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8fb20b2c4524cca0ad278299536dbae86289f142df97dc419177e29f393412d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:42.583684 kubelet[3318]: E0715 05:16:42.583312 3318 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8fb20b2c4524cca0ad278299536dbae86289f142df97dc419177e29f393412d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-w62jd" Jul 15 05:16:42.583684 kubelet[3318]: E0715 05:16:42.583335 3318 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8fb20b2c4524cca0ad278299536dbae86289f142df97dc419177e29f393412d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-w62jd" Jul 15 05:16:42.583813 kubelet[3318]: E0715 05:16:42.583397 3318 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-w62jd_calico-system(281753dc-9760-4189-82b0-00b06864b158)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-w62jd_calico-system(281753dc-9760-4189-82b0-00b06864b158)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8fb20b2c4524cca0ad278299536dbae86289f142df97dc419177e29f393412d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-w62jd" podUID="281753dc-9760-4189-82b0-00b06864b158" Jul 15 05:16:42.584628 containerd[2005]: time="2025-07-15T05:16:42.584435683Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xbgj9,Uid:5988605f-13f7-444f-ac86-a1779962cf15,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0269dfd57bf2b67d6814777118656219777dbeb3aa0a6d303b27c7ef0ca2c2d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:42.584771 containerd[2005]: time="2025-07-15T05:16:42.584648895Z" level=error msg="Failed to destroy network for sandbox \"0de0f6341c8e8643a57fe4852ea89ad329af57f1eeb1a1bf4eccc318e5f01e67\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:42.585729 kubelet[3318]: E0715 05:16:42.584837 3318 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0269dfd57bf2b67d6814777118656219777dbeb3aa0a6d303b27c7ef0ca2c2d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:42.585729 kubelet[3318]: E0715 05:16:42.584921 3318 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0269dfd57bf2b67d6814777118656219777dbeb3aa0a6d303b27c7ef0ca2c2d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-xbgj9" Jul 15 05:16:42.585729 kubelet[3318]: E0715 05:16:42.584973 3318 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0269dfd57bf2b67d6814777118656219777dbeb3aa0a6d303b27c7ef0ca2c2d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-xbgj9" Jul 15 05:16:42.586067 kubelet[3318]: E0715 05:16:42.585042 3318 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-xbgj9_kube-system(5988605f-13f7-444f-ac86-a1779962cf15)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-xbgj9_kube-system(5988605f-13f7-444f-ac86-a1779962cf15)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0269dfd57bf2b67d6814777118656219777dbeb3aa0a6d303b27c7ef0ca2c2d0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-xbgj9" podUID="5988605f-13f7-444f-ac86-a1779962cf15" Jul 15 05:16:42.586849 containerd[2005]: time="2025-07-15T05:16:42.586531442Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vdxxc,Uid:3c8ea6c2-6b54-497c-b3b0-486729d54117,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0de0f6341c8e8643a57fe4852ea89ad329af57f1eeb1a1bf4eccc318e5f01e67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:42.587070 kubelet[3318]: E0715 05:16:42.586716 3318 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0de0f6341c8e8643a57fe4852ea89ad329af57f1eeb1a1bf4eccc318e5f01e67\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:42.587070 kubelet[3318]: E0715 05:16:42.586970 3318 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0de0f6341c8e8643a57fe4852ea89ad329af57f1eeb1a1bf4eccc318e5f01e67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vdxxc" Jul 15 05:16:42.587433 kubelet[3318]: E0715 05:16:42.587302 3318 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0de0f6341c8e8643a57fe4852ea89ad329af57f1eeb1a1bf4eccc318e5f01e67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vdxxc" Jul 15 05:16:42.587638 kubelet[3318]: E0715 05:16:42.587527 3318 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vdxxc_calico-system(3c8ea6c2-6b54-497c-b3b0-486729d54117)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vdxxc_calico-system(3c8ea6c2-6b54-497c-b3b0-486729d54117)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0de0f6341c8e8643a57fe4852ea89ad329af57f1eeb1a1bf4eccc318e5f01e67\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vdxxc" podUID="3c8ea6c2-6b54-497c-b3b0-486729d54117" Jul 15 05:16:43.099930 systemd[1]: run-netns-cni\x2d6dff9a58\x2d8798\x2d7d6e\x2d3284\x2d21f8a96c894b.mount: Deactivated successfully. Jul 15 05:16:43.101147 systemd[1]: run-netns-cni\x2df6f0aab2\x2d92a4\x2d8537\x2d0589\x2d47166bb7e5e1.mount: Deactivated successfully. Jul 15 05:16:43.101465 systemd[1]: run-netns-cni\x2d3ce23fa9\x2def5d\x2d7184\x2d72b9\x2d78eae46a7021.mount: Deactivated successfully. Jul 15 05:16:43.101656 systemd[1]: run-netns-cni\x2dc5c3e0fb\x2d3458\x2dd489\x2d4cca\x2db4a5de857341.mount: Deactivated successfully. Jul 15 05:16:48.454051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1903853703.mount: Deactivated successfully. 
Jul 15 05:16:48.496512 containerd[2005]: time="2025-07-15T05:16:48.495309918Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 15 05:16:48.499466 containerd[2005]: time="2025-07-15T05:16:48.489753796Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:48.580043 containerd[2005]: time="2025-07-15T05:16:48.579972206Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:48.583032 containerd[2005]: time="2025-07-15T05:16:48.582967288Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:48.583809 containerd[2005]: time="2025-07-15T05:16:48.583765165Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 6.831048673s" Jul 15 05:16:48.583809 containerd[2005]: time="2025-07-15T05:16:48.583804866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 15 05:16:48.619176 containerd[2005]: time="2025-07-15T05:16:48.619125694Z" level=info msg="CreateContainer within sandbox \"c7df9927f86767a64836df93b6a8e038874191355751e1546706095bf17c839c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 15 05:16:48.651998 containerd[2005]: time="2025-07-15T05:16:48.651002138Z" level=info msg="Container f9322dacc487f0571f6b88e7bb08111a5cbafdd2cc1b24a0272afbd120187ca7: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:16:48.654150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2202982846.mount: Deactivated successfully. Jul 15 05:16:48.683178 containerd[2005]: time="2025-07-15T05:16:48.683133784Z" level=info msg="CreateContainer within sandbox \"c7df9927f86767a64836df93b6a8e038874191355751e1546706095bf17c839c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f9322dacc487f0571f6b88e7bb08111a5cbafdd2cc1b24a0272afbd120187ca7\"" Jul 15 05:16:48.684389 containerd[2005]: time="2025-07-15T05:16:48.684277610Z" level=info msg="StartContainer for \"f9322dacc487f0571f6b88e7bb08111a5cbafdd2cc1b24a0272afbd120187ca7\"" Jul 15 05:16:48.696290 containerd[2005]: time="2025-07-15T05:16:48.696246923Z" level=info msg="connecting to shim f9322dacc487f0571f6b88e7bb08111a5cbafdd2cc1b24a0272afbd120187ca7" address="unix:///run/containerd/s/4ad6a2b7440a229d9886ff04de5354bc6f782506f67d270115a82f8cf1a4ffd0" protocol=ttrpc version=3 Jul 15 05:16:48.861367 systemd[1]: Started cri-containerd-f9322dacc487f0571f6b88e7bb08111a5cbafdd2cc1b24a0272afbd120187ca7.scope - libcontainer container f9322dacc487f0571f6b88e7bb08111a5cbafdd2cc1b24a0272afbd120187ca7. Jul 15 05:16:48.964618 containerd[2005]: time="2025-07-15T05:16:48.964576785Z" level=info msg="StartContainer for \"f9322dacc487f0571f6b88e7bb08111a5cbafdd2cc1b24a0272afbd120187ca7\" returns successfully" Jul 15 05:16:50.265033 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Jul 15 05:16:50.298740 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 15 05:16:50.777284 kubelet[3318]: I0715 05:16:50.777242 3318 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 05:16:52.872014 kubelet[3318]: I0715 05:16:52.871943 3318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-95nrq" podStartSLOduration=5.428083316 podStartE2EDuration="21.871916804s" podCreationTimestamp="2025-07-15 05:16:31 +0000 UTC" firstStartedPulling="2025-07-15 05:16:32.14111381 +0000 UTC m=+23.703309599" lastFinishedPulling="2025-07-15 05:16:48.584947296 +0000 UTC m=+40.147143087" observedRunningTime="2025-07-15 05:16:49.79833585 +0000 UTC m=+41.360531653" watchObservedRunningTime="2025-07-15 05:16:52.871916804 +0000 UTC m=+44.434112606" Jul 15 05:16:52.954926 kubelet[3318]: I0715 05:16:52.954726 3318 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qndz9\" (UniqueName: \"kubernetes.io/projected/d33fee1b-fbb1-45ad-a062-d6ec68236181-kube-api-access-qndz9\") pod \"d33fee1b-fbb1-45ad-a062-d6ec68236181\" (UID: \"d33fee1b-fbb1-45ad-a062-d6ec68236181\") " Jul 15 05:16:52.954926 kubelet[3318]: I0715 05:16:52.954796 3318 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d33fee1b-fbb1-45ad-a062-d6ec68236181-whisker-ca-bundle\") pod \"d33fee1b-fbb1-45ad-a062-d6ec68236181\" (UID: \"d33fee1b-fbb1-45ad-a062-d6ec68236181\") " Jul 15 05:16:52.954926 kubelet[3318]: I0715 05:16:52.954816 3318 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d33fee1b-fbb1-45ad-a062-d6ec68236181-whisker-backend-key-pair\") pod \"d33fee1b-fbb1-45ad-a062-d6ec68236181\" (UID: \"d33fee1b-fbb1-45ad-a062-d6ec68236181\") " Jul 15 05:16:52.968410 systemd[1]: var-lib-kubelet-pods-d33fee1b\x2dfbb1\x2d45ad\x2da062\x2dd6ec68236181-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 15 05:16:52.970100 kubelet[3318]: I0715 05:16:52.970058 3318 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d33fee1b-fbb1-45ad-a062-d6ec68236181-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d33fee1b-fbb1-45ad-a062-d6ec68236181" (UID: "d33fee1b-fbb1-45ad-a062-d6ec68236181"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 15 05:16:52.970778 kubelet[3318]: I0715 05:16:52.970726 3318 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d33fee1b-fbb1-45ad-a062-d6ec68236181-kube-api-access-qndz9" (OuterVolumeSpecName: "kube-api-access-qndz9") pod "d33fee1b-fbb1-45ad-a062-d6ec68236181" (UID: "d33fee1b-fbb1-45ad-a062-d6ec68236181"). InnerVolumeSpecName "kube-api-access-qndz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 15 05:16:52.972420 kubelet[3318]: I0715 05:16:52.971932 3318 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d33fee1b-fbb1-45ad-a062-d6ec68236181-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d33fee1b-fbb1-45ad-a062-d6ec68236181" (UID: "d33fee1b-fbb1-45ad-a062-d6ec68236181"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 15 05:16:52.972263 systemd[1]: var-lib-kubelet-pods-d33fee1b\x2dfbb1\x2d45ad\x2da062\x2dd6ec68236181-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqndz9.mount: Deactivated successfully. Jul 15 05:16:53.023145 kubelet[3318]: I0715 05:16:53.022746 3318 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 05:16:53.056783 kubelet[3318]: I0715 05:16:53.056739 3318 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qndz9\" (UniqueName: \"kubernetes.io/projected/d33fee1b-fbb1-45ad-a062-d6ec68236181-kube-api-access-qndz9\") on node \"ip-172-31-23-74\" DevicePath \"\"" Jul 15 05:16:53.056783 kubelet[3318]: I0715 05:16:53.056785 3318 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d33fee1b-fbb1-45ad-a062-d6ec68236181-whisker-backend-key-pair\") on node \"ip-172-31-23-74\" DevicePath \"\"" Jul 15 05:16:53.056968 kubelet[3318]: I0715 05:16:53.056797 3318 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d33fee1b-fbb1-45ad-a062-d6ec68236181-whisker-ca-bundle\") on node \"ip-172-31-23-74\" DevicePath \"\"" Jul 15 05:16:53.562211 containerd[2005]: time="2025-07-15T05:16:53.562162997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xbgj9,Uid:5988605f-13f7-444f-ac86-a1779962cf15,Namespace:kube-system,Attempt:0,}" Jul 15 05:16:53.804030 systemd[1]: Removed slice kubepods-besteffort-podd33fee1b_fbb1_45ad_a062_d6ec68236181.slice - libcontainer container kubepods-besteffort-podd33fee1b_fbb1_45ad_a062_d6ec68236181.slice. Jul 15 05:16:53.948701 systemd[1]: Created slice kubepods-besteffort-pode08a1693_0026_4f90_af15_cd089075a17a.slice - libcontainer container kubepods-besteffort-pode08a1693_0026_4f90_af15_cd089075a17a.slice. 
Jul 15 05:16:53.964911 kubelet[3318]: I0715 05:16:53.963821 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e08a1693-0026-4f90-af15-cd089075a17a-whisker-backend-key-pair\") pod \"whisker-75bcfd49d-52jpz\" (UID: \"e08a1693-0026-4f90-af15-cd089075a17a\") " pod="calico-system/whisker-75bcfd49d-52jpz" Jul 15 05:16:53.965514 kubelet[3318]: I0715 05:16:53.965382 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e08a1693-0026-4f90-af15-cd089075a17a-whisker-ca-bundle\") pod \"whisker-75bcfd49d-52jpz\" (UID: \"e08a1693-0026-4f90-af15-cd089075a17a\") " pod="calico-system/whisker-75bcfd49d-52jpz" Jul 15 05:16:53.965514 kubelet[3318]: I0715 05:16:53.965462 3318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpgq4\" (UniqueName: \"kubernetes.io/projected/e08a1693-0026-4f90-af15-cd089075a17a-kube-api-access-rpgq4\") pod \"whisker-75bcfd49d-52jpz\" (UID: \"e08a1693-0026-4f90-af15-cd089075a17a\") " pod="calico-system/whisker-75bcfd49d-52jpz" Jul 15 05:16:54.253530 containerd[2005]: time="2025-07-15T05:16:54.253477610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75bcfd49d-52jpz,Uid:e08a1693-0026-4f90-af15-cd089075a17a,Namespace:calico-system,Attempt:0,}" Jul 15 05:16:54.411754 systemd-networkd[1721]: vxlan.calico: Link UP Jul 15 05:16:54.412180 systemd-networkd[1721]: vxlan.calico: Gained carrier Jul 15 05:16:54.474791 (udev-worker)[4812]: Network interface NamePolicy= disabled on kernel command line. Jul 15 05:16:54.475201 (udev-worker)[4782]: Network interface NamePolicy= disabled on kernel command line. Jul 15 05:16:54.480058 (udev-worker)[4815]: Network interface NamePolicy= disabled on kernel command line. 
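Note: at this point systemd-networkd reports the vxlan.calico overlay device up, and the cali* veth interfaces for individual pods follow below. The Calico-managed links on the node can be listed from Go with the standard library alone; the interface names are taken from this log, the rest is an illustrative sketch:

package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		// Keep only the Calico-managed links: the VXLAN device and per-pod veths.
		if ifc.Name == "vxlan.calico" || strings.HasPrefix(ifc.Name, "cali") {
			fmt.Printf("%-16s mtu=%d flags=%v\n", ifc.Name, ifc.MTU, ifc.Flags)
		}
	}
}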
Jul 15 05:16:54.560716 containerd[2005]: time="2025-07-15T05:16:54.560581900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-w62jd,Uid:281753dc-9760-4189-82b0-00b06864b158,Namespace:calico-system,Attempt:0,}" Jul 15 05:16:54.561500 containerd[2005]: time="2025-07-15T05:16:54.561262313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-76jhj,Uid:6aac3249-80fb-4b1f-a28c-3ac8c29919fc,Namespace:kube-system,Attempt:0,}" Jul 15 05:16:54.561864 containerd[2005]: time="2025-07-15T05:16:54.561652399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vdxxc,Uid:3c8ea6c2-6b54-497c-b3b0-486729d54117,Namespace:calico-system,Attempt:0,}" Jul 15 05:16:54.580905 kubelet[3318]: I0715 05:16:54.578560 3318 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d33fee1b-fbb1-45ad-a062-d6ec68236181" path="/var/lib/kubelet/pods/d33fee1b-fbb1-45ad-a062-d6ec68236181/volumes" Jul 15 05:16:55.440923 kubelet[3318]: I0715 05:16:55.440857 3318 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 05:16:55.557312 containerd[2005]: time="2025-07-15T05:16:55.557019619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6656f64997-pdnp6,Uid:a11110d2-6068-4e9c-9e8e-a93325afe40c,Namespace:calico-system,Attempt:0,}" Jul 15 05:16:55.703394 containerd[2005]: time="2025-07-15T05:16:55.703265531Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f9322dacc487f0571f6b88e7bb08111a5cbafdd2cc1b24a0272afbd120187ca7\" id:\"e939d248d0e9c2c3a942e908dc2ac3d045b3b526c4f634ce06451174bfcf2e8e\" pid:4924 exited_at:{seconds:1752556615 nanos:679841939}" Jul 15 05:16:55.829575 containerd[2005]: time="2025-07-15T05:16:55.829536210Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f9322dacc487f0571f6b88e7bb08111a5cbafdd2cc1b24a0272afbd120187ca7\" id:\"eefe677ac0a08a32c2930110e8bfbe2f76ed42f7ce80985fd2658c1679ab76a4\" pid:4963 exited_at:{seconds:1752556615 nanos:829202480}" Jul 15 05:16:56.156310 systemd-networkd[1721]: vxlan.calico: Gained IPv6LL Jul 15 05:16:56.899642 systemd-networkd[1721]: cali51b8ddb0ac8: Link UP Jul 15 05:16:56.900899 systemd-networkd[1721]: cali51b8ddb0ac8: Gained carrier Jul 15 05:16:56.953112 containerd[2005]: 2025-07-15 05:16:55.663 [INFO][4925] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--74-k8s-calico--kube--controllers--6656f64997--pdnp6-eth0 calico-kube-controllers-6656f64997- calico-system a11110d2-6068-4e9c-9e8e-a93325afe40c 833 0 2025-07-15 05:16:31 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6656f64997 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-23-74 calico-kube-controllers-6656f64997-pdnp6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali51b8ddb0ac8 [] [] }} ContainerID="685f1bfc91f8d7c6874940d3dba3738575dba1f2370f1bd48e2c6ed5318f56f6" Namespace="calico-system" Pod="calico-kube-controllers-6656f64997-pdnp6" WorkloadEndpoint="ip--172--31--23--74-k8s-calico--kube--controllers--6656f64997--pdnp6-" Jul 15 05:16:56.953112 containerd[2005]: 2025-07-15 05:16:55.664 [INFO][4925] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="685f1bfc91f8d7c6874940d3dba3738575dba1f2370f1bd48e2c6ed5318f56f6" Namespace="calico-system" Pod="calico-kube-controllers-6656f64997-pdnp6" WorkloadEndpoint="ip--172--31--23--74-k8s-calico--kube--controllers--6656f64997--pdnp6-eth0" Jul 15 05:16:56.953112 containerd[2005]: 2025-07-15 05:16:56.769 [INFO][4947] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="685f1bfc91f8d7c6874940d3dba3738575dba1f2370f1bd48e2c6ed5318f56f6" HandleID="k8s-pod-network.685f1bfc91f8d7c6874940d3dba3738575dba1f2370f1bd48e2c6ed5318f56f6" Workload="ip--172--31--23--74-k8s-calico--kube--controllers--6656f64997--pdnp6-eth0" Jul 15 05:16:56.953560 containerd[2005]: 2025-07-15 05:16:56.769 [INFO][4947] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="685f1bfc91f8d7c6874940d3dba3738575dba1f2370f1bd48e2c6ed5318f56f6" HandleID="k8s-pod-network.685f1bfc91f8d7c6874940d3dba3738575dba1f2370f1bd48e2c6ed5318f56f6" Workload="ip--172--31--23--74-k8s-calico--kube--controllers--6656f64997--pdnp6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003915e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-74", "pod":"calico-kube-controllers-6656f64997-pdnp6", "timestamp":"2025-07-15 05:16:56.769106079 +0000 UTC"}, Hostname:"ip-172-31-23-74", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:16:56.953560 containerd[2005]: 2025-07-15 05:16:56.769 [INFO][4947] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:16:56.953560 containerd[2005]: 2025-07-15 05:16:56.769 [INFO][4947] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 05:16:56.953560 containerd[2005]: 2025-07-15 05:16:56.770 [INFO][4947] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-74' Jul 15 05:16:56.953560 containerd[2005]: 2025-07-15 05:16:56.780 [INFO][4947] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.685f1bfc91f8d7c6874940d3dba3738575dba1f2370f1bd48e2c6ed5318f56f6" host="ip-172-31-23-74" Jul 15 05:16:56.953560 containerd[2005]: 2025-07-15 05:16:56.857 [INFO][4947] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-74" Jul 15 05:16:56.953560 containerd[2005]: 2025-07-15 05:16:56.863 [INFO][4947] ipam/ipam.go 511: Trying affinity for 192.168.114.192/26 host="ip-172-31-23-74" Jul 15 05:16:56.953560 containerd[2005]: 2025-07-15 05:16:56.865 [INFO][4947] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.192/26 host="ip-172-31-23-74" Jul 15 05:16:56.953560 containerd[2005]: 2025-07-15 05:16:56.867 [INFO][4947] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.192/26 host="ip-172-31-23-74" Jul 15 05:16:56.953799 containerd[2005]: 2025-07-15 05:16:56.867 [INFO][4947] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.114.192/26 handle="k8s-pod-network.685f1bfc91f8d7c6874940d3dba3738575dba1f2370f1bd48e2c6ed5318f56f6" host="ip-172-31-23-74" Jul 15 05:16:56.953799 containerd[2005]: 2025-07-15 05:16:56.869 [INFO][4947] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.685f1bfc91f8d7c6874940d3dba3738575dba1f2370f1bd48e2c6ed5318f56f6 Jul 15 05:16:56.953799 containerd[2005]: 2025-07-15 05:16:56.876 [INFO][4947] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.114.192/26 handle="k8s-pod-network.685f1bfc91f8d7c6874940d3dba3738575dba1f2370f1bd48e2c6ed5318f56f6" host="ip-172-31-23-74" Jul 15 05:16:56.953799 containerd[2005]: 2025-07-15 05:16:56.886 [INFO][4947] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.114.193/26] block=192.168.114.192/26 handle="k8s-pod-network.685f1bfc91f8d7c6874940d3dba3738575dba1f2370f1bd48e2c6ed5318f56f6" host="ip-172-31-23-74" Jul 15 05:16:56.953799 containerd[2005]: 2025-07-15 05:16:56.886 [INFO][4947] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.193/26] handle="k8s-pod-network.685f1bfc91f8d7c6874940d3dba3738575dba1f2370f1bd48e2c6ed5318f56f6" host="ip-172-31-23-74" Jul 15 05:16:56.953799 containerd[2005]: 2025-07-15 05:16:56.886 [INFO][4947] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
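Note: the IPAM trace above shows the node holding an affinity for the block 192.168.114.192/26 and claiming 192.168.114.193 from it for calico-kube-controllers. The block arithmetic is easy to confirm: a /26 leaves 6 host bits, so the block spans 64 addresses (.192 through .255). A quick check with net/netip, using the values printed in the log:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.114.192/26")
	assigned := netip.MustParseAddr("192.168.114.193")

	// 32 - 26 = 6 host bits, i.e. 2^6 = 64 addresses in the block.
	size := 1 << (32 - block.Bits())
	fmt.Printf("block %s holds %d addresses\n", block, size)
	fmt.Printf("%s inside block: %v\n", assigned, block.Contains(assigned))
}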
Jul 15 05:16:56.953799 containerd[2005]: 2025-07-15 05:16:56.886 [INFO][4947] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.114.193/26] IPv6=[] ContainerID="685f1bfc91f8d7c6874940d3dba3738575dba1f2370f1bd48e2c6ed5318f56f6" HandleID="k8s-pod-network.685f1bfc91f8d7c6874940d3dba3738575dba1f2370f1bd48e2c6ed5318f56f6" Workload="ip--172--31--23--74-k8s-calico--kube--controllers--6656f64997--pdnp6-eth0" Jul 15 05:16:56.955072 containerd[2005]: 2025-07-15 05:16:56.892 [INFO][4925] cni-plugin/k8s.go 418: Populated endpoint ContainerID="685f1bfc91f8d7c6874940d3dba3738575dba1f2370f1bd48e2c6ed5318f56f6" Namespace="calico-system" Pod="calico-kube-controllers-6656f64997-pdnp6" WorkloadEndpoint="ip--172--31--23--74-k8s-calico--kube--controllers--6656f64997--pdnp6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--74-k8s-calico--kube--controllers--6656f64997--pdnp6-eth0", GenerateName:"calico-kube-controllers-6656f64997-", Namespace:"calico-system", SelfLink:"", UID:"a11110d2-6068-4e9c-9e8e-a93325afe40c", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6656f64997", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-74", ContainerID:"", Pod:"calico-kube-controllers-6656f64997-pdnp6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali51b8ddb0ac8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:56.955161 containerd[2005]: 2025-07-15 05:16:56.892 [INFO][4925] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.193/32] ContainerID="685f1bfc91f8d7c6874940d3dba3738575dba1f2370f1bd48e2c6ed5318f56f6" Namespace="calico-system" Pod="calico-kube-controllers-6656f64997-pdnp6" WorkloadEndpoint="ip--172--31--23--74-k8s-calico--kube--controllers--6656f64997--pdnp6-eth0" Jul 15 05:16:56.955161 containerd[2005]: 2025-07-15 05:16:56.892 [INFO][4925] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali51b8ddb0ac8 ContainerID="685f1bfc91f8d7c6874940d3dba3738575dba1f2370f1bd48e2c6ed5318f56f6" Namespace="calico-system" Pod="calico-kube-controllers-6656f64997-pdnp6" WorkloadEndpoint="ip--172--31--23--74-k8s-calico--kube--controllers--6656f64997--pdnp6-eth0" Jul 15 05:16:56.955161 containerd[2005]: 2025-07-15 05:16:56.902 [INFO][4925] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="685f1bfc91f8d7c6874940d3dba3738575dba1f2370f1bd48e2c6ed5318f56f6" Namespace="calico-system" Pod="calico-kube-controllers-6656f64997-pdnp6" WorkloadEndpoint="ip--172--31--23--74-k8s-calico--kube--controllers--6656f64997--pdnp6-eth0" Jul 15 05:16:56.955235 containerd[2005]: 
2025-07-15 05:16:56.903 [INFO][4925] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="685f1bfc91f8d7c6874940d3dba3738575dba1f2370f1bd48e2c6ed5318f56f6" Namespace="calico-system" Pod="calico-kube-controllers-6656f64997-pdnp6" WorkloadEndpoint="ip--172--31--23--74-k8s-calico--kube--controllers--6656f64997--pdnp6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--74-k8s-calico--kube--controllers--6656f64997--pdnp6-eth0", GenerateName:"calico-kube-controllers-6656f64997-", Namespace:"calico-system", SelfLink:"", UID:"a11110d2-6068-4e9c-9e8e-a93325afe40c", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6656f64997", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-74", ContainerID:"685f1bfc91f8d7c6874940d3dba3738575dba1f2370f1bd48e2c6ed5318f56f6", Pod:"calico-kube-controllers-6656f64997-pdnp6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali51b8ddb0ac8", MAC:"8a:54:c0:eb:3e:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:56.955295 containerd[2005]: 2025-07-15 05:16:56.948 [INFO][4925] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="685f1bfc91f8d7c6874940d3dba3738575dba1f2370f1bd48e2c6ed5318f56f6" Namespace="calico-system" Pod="calico-kube-controllers-6656f64997-pdnp6" WorkloadEndpoint="ip--172--31--23--74-k8s-calico--kube--controllers--6656f64997--pdnp6-eth0" Jul 15 05:16:57.040537 systemd-networkd[1721]: calic1b9ef12ecb: Link UP Jul 15 05:16:57.047030 systemd-networkd[1721]: calic1b9ef12ecb: Gained carrier Jul 15 05:16:57.086436 containerd[2005]: 2025-07-15 05:16:54.292 [INFO][4783] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--74-k8s-whisker--75bcfd49d--52jpz-eth0 whisker-75bcfd49d- calico-system e08a1693-0026-4f90-af15-cd089075a17a 915 0 2025-07-15 05:16:53 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:75bcfd49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-23-74 whisker-75bcfd49d-52jpz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic1b9ef12ecb [] [] }} ContainerID="4b39e07bad710ca5ff30e781e670c1bfc19fbe320993d6c051346e731d981910" Namespace="calico-system" Pod="whisker-75bcfd49d-52jpz" WorkloadEndpoint="ip--172--31--23--74-k8s-whisker--75bcfd49d--52jpz-" Jul 15 05:16:57.086436 containerd[2005]: 2025-07-15 05:16:54.292 [INFO][4783] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="4b39e07bad710ca5ff30e781e670c1bfc19fbe320993d6c051346e731d981910" Namespace="calico-system" Pod="whisker-75bcfd49d-52jpz" WorkloadEndpoint="ip--172--31--23--74-k8s-whisker--75bcfd49d--52jpz-eth0" Jul 15 05:16:57.086436 containerd[2005]: 2025-07-15 05:16:56.769 [INFO][4795] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b39e07bad710ca5ff30e781e670c1bfc19fbe320993d6c051346e731d981910" HandleID="k8s-pod-network.4b39e07bad710ca5ff30e781e670c1bfc19fbe320993d6c051346e731d981910" Workload="ip--172--31--23--74-k8s-whisker--75bcfd49d--52jpz-eth0" Jul 15 05:16:57.086700 containerd[2005]: 2025-07-15 05:16:56.769 [INFO][4795] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4b39e07bad710ca5ff30e781e670c1bfc19fbe320993d6c051346e731d981910" HandleID="k8s-pod-network.4b39e07bad710ca5ff30e781e670c1bfc19fbe320993d6c051346e731d981910" Workload="ip--172--31--23--74-k8s-whisker--75bcfd49d--52jpz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034c460), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-74", "pod":"whisker-75bcfd49d-52jpz", "timestamp":"2025-07-15 05:16:56.768764194 +0000 UTC"}, Hostname:"ip-172-31-23-74", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:16:57.086700 containerd[2005]: 2025-07-15 05:16:56.770 [INFO][4795] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:16:57.086700 containerd[2005]: 2025-07-15 05:16:56.886 [INFO][4795] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 05:16:57.086700 containerd[2005]: 2025-07-15 05:16:56.886 [INFO][4795] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-74' Jul 15 05:16:57.086700 containerd[2005]: 2025-07-15 05:16:56.895 [INFO][4795] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4b39e07bad710ca5ff30e781e670c1bfc19fbe320993d6c051346e731d981910" host="ip-172-31-23-74" Jul 15 05:16:57.086700 containerd[2005]: 2025-07-15 05:16:56.959 [INFO][4795] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-74" Jul 15 05:16:57.086700 containerd[2005]: 2025-07-15 05:16:56.973 [INFO][4795] ipam/ipam.go 511: Trying affinity for 192.168.114.192/26 host="ip-172-31-23-74" Jul 15 05:16:57.086700 containerd[2005]: 2025-07-15 05:16:56.977 [INFO][4795] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.192/26 host="ip-172-31-23-74" Jul 15 05:16:57.086700 containerd[2005]: 2025-07-15 05:16:56.982 [INFO][4795] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.192/26 host="ip-172-31-23-74" Jul 15 05:16:57.088288 containerd[2005]: 2025-07-15 05:16:56.983 [INFO][4795] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.114.192/26 handle="k8s-pod-network.4b39e07bad710ca5ff30e781e670c1bfc19fbe320993d6c051346e731d981910" host="ip-172-31-23-74" Jul 15 05:16:57.088288 containerd[2005]: 2025-07-15 05:16:56.986 [INFO][4795] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4b39e07bad710ca5ff30e781e670c1bfc19fbe320993d6c051346e731d981910 Jul 15 05:16:57.088288 containerd[2005]: 2025-07-15 05:16:56.992 [INFO][4795] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.114.192/26 handle="k8s-pod-network.4b39e07bad710ca5ff30e781e670c1bfc19fbe320993d6c051346e731d981910" host="ip-172-31-23-74" Jul 15 05:16:57.088288 
containerd[2005]: 2025-07-15 05:16:57.026 [INFO][4795] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.114.194/26] block=192.168.114.192/26 handle="k8s-pod-network.4b39e07bad710ca5ff30e781e670c1bfc19fbe320993d6c051346e731d981910" host="ip-172-31-23-74" Jul 15 05:16:57.088288 containerd[2005]: 2025-07-15 05:16:57.026 [INFO][4795] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.194/26] handle="k8s-pod-network.4b39e07bad710ca5ff30e781e670c1bfc19fbe320993d6c051346e731d981910" host="ip-172-31-23-74" Jul 15 05:16:57.088288 containerd[2005]: 2025-07-15 05:16:57.026 [INFO][4795] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 05:16:57.088288 containerd[2005]: 2025-07-15 05:16:57.026 [INFO][4795] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.114.194/26] IPv6=[] ContainerID="4b39e07bad710ca5ff30e781e670c1bfc19fbe320993d6c051346e731d981910" HandleID="k8s-pod-network.4b39e07bad710ca5ff30e781e670c1bfc19fbe320993d6c051346e731d981910" Workload="ip--172--31--23--74-k8s-whisker--75bcfd49d--52jpz-eth0" Jul 15 05:16:57.088459 containerd[2005]: 2025-07-15 05:16:57.036 [INFO][4783] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4b39e07bad710ca5ff30e781e670c1bfc19fbe320993d6c051346e731d981910" Namespace="calico-system" Pod="whisker-75bcfd49d-52jpz" WorkloadEndpoint="ip--172--31--23--74-k8s-whisker--75bcfd49d--52jpz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--74-k8s-whisker--75bcfd49d--52jpz-eth0", GenerateName:"whisker-75bcfd49d-", Namespace:"calico-system", SelfLink:"", UID:"e08a1693-0026-4f90-af15-cd089075a17a", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"75bcfd49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-74", ContainerID:"", Pod:"whisker-75bcfd49d-52jpz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.114.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic1b9ef12ecb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:57.088459 containerd[2005]: 2025-07-15 05:16:57.036 [INFO][4783] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.194/32] ContainerID="4b39e07bad710ca5ff30e781e670c1bfc19fbe320993d6c051346e731d981910" Namespace="calico-system" Pod="whisker-75bcfd49d-52jpz" WorkloadEndpoint="ip--172--31--23--74-k8s-whisker--75bcfd49d--52jpz-eth0" Jul 15 05:16:57.088547 containerd[2005]: 2025-07-15 05:16:57.036 [INFO][4783] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic1b9ef12ecb ContainerID="4b39e07bad710ca5ff30e781e670c1bfc19fbe320993d6c051346e731d981910" Namespace="calico-system" Pod="whisker-75bcfd49d-52jpz" WorkloadEndpoint="ip--172--31--23--74-k8s-whisker--75bcfd49d--52jpz-eth0" Jul 15 05:16:57.088547 
containerd[2005]: 2025-07-15 05:16:57.046 [INFO][4783] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b39e07bad710ca5ff30e781e670c1bfc19fbe320993d6c051346e731d981910" Namespace="calico-system" Pod="whisker-75bcfd49d-52jpz" WorkloadEndpoint="ip--172--31--23--74-k8s-whisker--75bcfd49d--52jpz-eth0" Jul 15 05:16:57.088598 containerd[2005]: 2025-07-15 05:16:57.046 [INFO][4783] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4b39e07bad710ca5ff30e781e670c1bfc19fbe320993d6c051346e731d981910" Namespace="calico-system" Pod="whisker-75bcfd49d-52jpz" WorkloadEndpoint="ip--172--31--23--74-k8s-whisker--75bcfd49d--52jpz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--74-k8s-whisker--75bcfd49d--52jpz-eth0", GenerateName:"whisker-75bcfd49d-", Namespace:"calico-system", SelfLink:"", UID:"e08a1693-0026-4f90-af15-cd089075a17a", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"75bcfd49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-74", ContainerID:"4b39e07bad710ca5ff30e781e670c1bfc19fbe320993d6c051346e731d981910", Pod:"whisker-75bcfd49d-52jpz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.114.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic1b9ef12ecb", MAC:"72:c0:0f:22:9b:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:57.088652 containerd[2005]: 2025-07-15 05:16:57.079 [INFO][4783] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4b39e07bad710ca5ff30e781e670c1bfc19fbe320993d6c051346e731d981910" Namespace="calico-system" Pod="whisker-75bcfd49d-52jpz" WorkloadEndpoint="ip--172--31--23--74-k8s-whisker--75bcfd49d--52jpz-eth0" Jul 15 05:16:57.193174 systemd-networkd[1721]: cali53418b569fa: Link UP Jul 15 05:16:57.199800 systemd-networkd[1721]: cali53418b569fa: Gained carrier Jul 15 05:16:57.254416 containerd[2005]: 2025-07-15 05:16:54.763 [INFO][4840] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--74-k8s-coredns--7c65d6cfc9--76jhj-eth0 coredns-7c65d6cfc9- kube-system 6aac3249-80fb-4b1f-a28c-3ac8c29919fc 832 0 2025-07-15 05:16:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-74 coredns-7c65d6cfc9-76jhj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali53418b569fa [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="405449a9cd99ce49581c2cf00e1eb4bbc08958be2d05147ef3b502d4fa003648" Namespace="kube-system" Pod="coredns-7c65d6cfc9-76jhj" 
WorkloadEndpoint="ip--172--31--23--74-k8s-coredns--7c65d6cfc9--76jhj-" Jul 15 05:16:57.254416 containerd[2005]: 2025-07-15 05:16:54.765 [INFO][4840] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="405449a9cd99ce49581c2cf00e1eb4bbc08958be2d05147ef3b502d4fa003648" Namespace="kube-system" Pod="coredns-7c65d6cfc9-76jhj" WorkloadEndpoint="ip--172--31--23--74-k8s-coredns--7c65d6cfc9--76jhj-eth0" Jul 15 05:16:57.254416 containerd[2005]: 2025-07-15 05:16:56.769 [INFO][4868] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="405449a9cd99ce49581c2cf00e1eb4bbc08958be2d05147ef3b502d4fa003648" HandleID="k8s-pod-network.405449a9cd99ce49581c2cf00e1eb4bbc08958be2d05147ef3b502d4fa003648" Workload="ip--172--31--23--74-k8s-coredns--7c65d6cfc9--76jhj-eth0" Jul 15 05:16:57.255631 containerd[2005]: 2025-07-15 05:16:56.770 [INFO][4868] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="405449a9cd99ce49581c2cf00e1eb4bbc08958be2d05147ef3b502d4fa003648" HandleID="k8s-pod-network.405449a9cd99ce49581c2cf00e1eb4bbc08958be2d05147ef3b502d4fa003648" Workload="ip--172--31--23--74-k8s-coredns--7c65d6cfc9--76jhj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001009f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-74", "pod":"coredns-7c65d6cfc9-76jhj", "timestamp":"2025-07-15 05:16:56.769795385 +0000 UTC"}, Hostname:"ip-172-31-23-74", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:16:57.255631 containerd[2005]: 2025-07-15 05:16:56.770 [INFO][4868] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:16:57.255631 containerd[2005]: 2025-07-15 05:16:57.027 [INFO][4868] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 05:16:57.255631 containerd[2005]: 2025-07-15 05:16:57.027 [INFO][4868] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-74' Jul 15 05:16:57.255631 containerd[2005]: 2025-07-15 05:16:57.059 [INFO][4868] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.405449a9cd99ce49581c2cf00e1eb4bbc08958be2d05147ef3b502d4fa003648" host="ip-172-31-23-74" Jul 15 05:16:57.255631 containerd[2005]: 2025-07-15 05:16:57.083 [INFO][4868] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-74" Jul 15 05:16:57.255631 containerd[2005]: 2025-07-15 05:16:57.112 [INFO][4868] ipam/ipam.go 511: Trying affinity for 192.168.114.192/26 host="ip-172-31-23-74" Jul 15 05:16:57.255631 containerd[2005]: 2025-07-15 05:16:57.118 [INFO][4868] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.192/26 host="ip-172-31-23-74" Jul 15 05:16:57.255631 containerd[2005]: 2025-07-15 05:16:57.132 [INFO][4868] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.192/26 host="ip-172-31-23-74" Jul 15 05:16:57.258036 containerd[2005]: 2025-07-15 05:16:57.132 [INFO][4868] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.114.192/26 handle="k8s-pod-network.405449a9cd99ce49581c2cf00e1eb4bbc08958be2d05147ef3b502d4fa003648" host="ip-172-31-23-74" Jul 15 05:16:57.258036 containerd[2005]: 2025-07-15 05:16:57.137 [INFO][4868] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.405449a9cd99ce49581c2cf00e1eb4bbc08958be2d05147ef3b502d4fa003648 Jul 15 05:16:57.258036 containerd[2005]: 2025-07-15 05:16:57.155 [INFO][4868] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.114.192/26 handle="k8s-pod-network.405449a9cd99ce49581c2cf00e1eb4bbc08958be2d05147ef3b502d4fa003648" host="ip-172-31-23-74" Jul 15 05:16:57.258036 containerd[2005]: 2025-07-15 05:16:57.165 [INFO][4868] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.114.195/26] block=192.168.114.192/26 handle="k8s-pod-network.405449a9cd99ce49581c2cf00e1eb4bbc08958be2d05147ef3b502d4fa003648" host="ip-172-31-23-74" Jul 15 05:16:57.258036 containerd[2005]: 2025-07-15 05:16:57.165 [INFO][4868] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.195/26] handle="k8s-pod-network.405449a9cd99ce49581c2cf00e1eb4bbc08958be2d05147ef3b502d4fa003648" host="ip-172-31-23-74" Jul 15 05:16:57.258036 containerd[2005]: 2025-07-15 05:16:57.166 [INFO][4868] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 05:16:57.258036 containerd[2005]: 2025-07-15 05:16:57.166 [INFO][4868] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.114.195/26] IPv6=[] ContainerID="405449a9cd99ce49581c2cf00e1eb4bbc08958be2d05147ef3b502d4fa003648" HandleID="k8s-pod-network.405449a9cd99ce49581c2cf00e1eb4bbc08958be2d05147ef3b502d4fa003648" Workload="ip--172--31--23--74-k8s-coredns--7c65d6cfc9--76jhj-eth0" Jul 15 05:16:57.258374 containerd[2005]: 2025-07-15 05:16:57.182 [INFO][4840] cni-plugin/k8s.go 418: Populated endpoint ContainerID="405449a9cd99ce49581c2cf00e1eb4bbc08958be2d05147ef3b502d4fa003648" Namespace="kube-system" Pod="coredns-7c65d6cfc9-76jhj" WorkloadEndpoint="ip--172--31--23--74-k8s-coredns--7c65d6cfc9--76jhj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--74-k8s-coredns--7c65d6cfc9--76jhj-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"6aac3249-80fb-4b1f-a28c-3ac8c29919fc", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-74", ContainerID:"", Pod:"coredns-7c65d6cfc9-76jhj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali53418b569fa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:57.258374 containerd[2005]: 2025-07-15 05:16:57.183 [INFO][4840] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.195/32] ContainerID="405449a9cd99ce49581c2cf00e1eb4bbc08958be2d05147ef3b502d4fa003648" Namespace="kube-system" Pod="coredns-7c65d6cfc9-76jhj" WorkloadEndpoint="ip--172--31--23--74-k8s-coredns--7c65d6cfc9--76jhj-eth0" Jul 15 05:16:57.258374 containerd[2005]: 2025-07-15 05:16:57.183 [INFO][4840] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali53418b569fa ContainerID="405449a9cd99ce49581c2cf00e1eb4bbc08958be2d05147ef3b502d4fa003648" Namespace="kube-system" Pod="coredns-7c65d6cfc9-76jhj" WorkloadEndpoint="ip--172--31--23--74-k8s-coredns--7c65d6cfc9--76jhj-eth0" Jul 15 05:16:57.258374 containerd[2005]: 2025-07-15 05:16:57.198 [INFO][4840] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="405449a9cd99ce49581c2cf00e1eb4bbc08958be2d05147ef3b502d4fa003648" Namespace="kube-system" Pod="coredns-7c65d6cfc9-76jhj" 
WorkloadEndpoint="ip--172--31--23--74-k8s-coredns--7c65d6cfc9--76jhj-eth0" Jul 15 05:16:57.258374 containerd[2005]: 2025-07-15 05:16:57.200 [INFO][4840] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="405449a9cd99ce49581c2cf00e1eb4bbc08958be2d05147ef3b502d4fa003648" Namespace="kube-system" Pod="coredns-7c65d6cfc9-76jhj" WorkloadEndpoint="ip--172--31--23--74-k8s-coredns--7c65d6cfc9--76jhj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--74-k8s-coredns--7c65d6cfc9--76jhj-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"6aac3249-80fb-4b1f-a28c-3ac8c29919fc", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-74", ContainerID:"405449a9cd99ce49581c2cf00e1eb4bbc08958be2d05147ef3b502d4fa003648", Pod:"coredns-7c65d6cfc9-76jhj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali53418b569fa", MAC:"16:c2:0d:37:2d:63", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:57.258374 containerd[2005]: 2025-07-15 05:16:57.235 [INFO][4840] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="405449a9cd99ce49581c2cf00e1eb4bbc08958be2d05147ef3b502d4fa003648" Namespace="kube-system" Pod="coredns-7c65d6cfc9-76jhj" WorkloadEndpoint="ip--172--31--23--74-k8s-coredns--7c65d6cfc9--76jhj-eth0" Jul 15 05:16:57.282244 containerd[2005]: time="2025-07-15T05:16:57.282185016Z" level=info msg="connecting to shim 685f1bfc91f8d7c6874940d3dba3738575dba1f2370f1bd48e2c6ed5318f56f6" address="unix:///run/containerd/s/b45290e1a8357461cf7aa6bdce73a1b1ba62655492a8c2b70d3d34ad0697e276" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:16:57.299127 containerd[2005]: time="2025-07-15T05:16:57.294436536Z" level=info msg="connecting to shim 4b39e07bad710ca5ff30e781e670c1bfc19fbe320993d6c051346e731d981910" address="unix:///run/containerd/s/59059f37001970755e12f17dd8edc9854f199211459a0e2e0145d0d0d7dbe797" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:16:57.372785 systemd-networkd[1721]: cali7fa58f77068: Link UP Jul 15 05:16:57.374678 systemd-networkd[1721]: cali7fa58f77068: Gained carrier Jul 15 05:16:57.378785 containerd[2005]: time="2025-07-15T05:16:57.378010858Z" level=info msg="connecting 
to shim 405449a9cd99ce49581c2cf00e1eb4bbc08958be2d05147ef3b502d4fa003648" address="unix:///run/containerd/s/9c43cce3fbf5c3a919db84031437452b68a561fbb161fa7e920fc74cdb1e7281" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:16:57.443268 containerd[2005]: 2025-07-15 05:16:54.764 [INFO][4825] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--74-k8s-goldmane--58fd7646b9--w62jd-eth0 goldmane-58fd7646b9- calico-system 281753dc-9760-4189-82b0-00b06864b158 834 0 2025-07-15 05:16:30 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-23-74 goldmane-58fd7646b9-w62jd eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7fa58f77068 [] [] }} ContainerID="5bd3ebd1a110e1845761cfa6901d48c5bb629f4dccdf29ccc1c2ab9f662f8973" Namespace="calico-system" Pod="goldmane-58fd7646b9-w62jd" WorkloadEndpoint="ip--172--31--23--74-k8s-goldmane--58fd7646b9--w62jd-" Jul 15 05:16:57.443268 containerd[2005]: 2025-07-15 05:16:54.764 [INFO][4825] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5bd3ebd1a110e1845761cfa6901d48c5bb629f4dccdf29ccc1c2ab9f662f8973" Namespace="calico-system" Pod="goldmane-58fd7646b9-w62jd" WorkloadEndpoint="ip--172--31--23--74-k8s-goldmane--58fd7646b9--w62jd-eth0" Jul 15 05:16:57.443268 containerd[2005]: 2025-07-15 05:16:56.769 [INFO][4869] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5bd3ebd1a110e1845761cfa6901d48c5bb629f4dccdf29ccc1c2ab9f662f8973" HandleID="k8s-pod-network.5bd3ebd1a110e1845761cfa6901d48c5bb629f4dccdf29ccc1c2ab9f662f8973" Workload="ip--172--31--23--74-k8s-goldmane--58fd7646b9--w62jd-eth0" Jul 15 05:16:57.443268 containerd[2005]: 2025-07-15 05:16:56.770 [INFO][4869] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5bd3ebd1a110e1845761cfa6901d48c5bb629f4dccdf29ccc1c2ab9f662f8973" HandleID="k8s-pod-network.5bd3ebd1a110e1845761cfa6901d48c5bb629f4dccdf29ccc1c2ab9f662f8973" Workload="ip--172--31--23--74-k8s-goldmane--58fd7646b9--w62jd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000303e00), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-74", "pod":"goldmane-58fd7646b9-w62jd", "timestamp":"2025-07-15 05:16:56.769448818 +0000 UTC"}, Hostname:"ip-172-31-23-74", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:16:57.443268 containerd[2005]: 2025-07-15 05:16:56.770 [INFO][4869] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:16:57.443268 containerd[2005]: 2025-07-15 05:16:57.166 [INFO][4869] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
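Note: the CoreDNS WorkloadEndpoint dumped a few entries earlier lists its ports in hex: Port:0x35 is 53 (the dns and dns-tcp ports) and Port:0x23c1 is 9153 (the metrics port). Trivial to confirm:

package main

import "fmt"

func main() {
	// Hex port values as printed in the WorkloadEndpoint dump.
	fmt.Printf("dns=%d metrics=%d\n", 0x35, 0x23c1) // dns=53 metrics=9153
}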
Jul 15 05:16:57.443268 containerd[2005]: 2025-07-15 05:16:57.166 [INFO][4869] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-74' Jul 15 05:16:57.443268 containerd[2005]: 2025-07-15 05:16:57.197 [INFO][4869] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5bd3ebd1a110e1845761cfa6901d48c5bb629f4dccdf29ccc1c2ab9f662f8973" host="ip-172-31-23-74" Jul 15 05:16:57.443268 containerd[2005]: 2025-07-15 05:16:57.231 [INFO][4869] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-74" Jul 15 05:16:57.443268 containerd[2005]: 2025-07-15 05:16:57.259 [INFO][4869] ipam/ipam.go 511: Trying affinity for 192.168.114.192/26 host="ip-172-31-23-74" Jul 15 05:16:57.443268 containerd[2005]: 2025-07-15 05:16:57.268 [INFO][4869] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.192/26 host="ip-172-31-23-74" Jul 15 05:16:57.443268 containerd[2005]: 2025-07-15 05:16:57.276 [INFO][4869] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.192/26 host="ip-172-31-23-74" Jul 15 05:16:57.443268 containerd[2005]: 2025-07-15 05:16:57.276 [INFO][4869] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.114.192/26 handle="k8s-pod-network.5bd3ebd1a110e1845761cfa6901d48c5bb629f4dccdf29ccc1c2ab9f662f8973" host="ip-172-31-23-74" Jul 15 05:16:57.443268 containerd[2005]: 2025-07-15 05:16:57.280 [INFO][4869] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5bd3ebd1a110e1845761cfa6901d48c5bb629f4dccdf29ccc1c2ab9f662f8973 Jul 15 05:16:57.443268 containerd[2005]: 2025-07-15 05:16:57.301 [INFO][4869] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.114.192/26 handle="k8s-pod-network.5bd3ebd1a110e1845761cfa6901d48c5bb629f4dccdf29ccc1c2ab9f662f8973" host="ip-172-31-23-74" Jul 15 05:16:57.443268 containerd[2005]: 2025-07-15 05:16:57.349 [INFO][4869] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.114.196/26] block=192.168.114.192/26 handle="k8s-pod-network.5bd3ebd1a110e1845761cfa6901d48c5bb629f4dccdf29ccc1c2ab9f662f8973" host="ip-172-31-23-74" Jul 15 05:16:57.443268 containerd[2005]: 2025-07-15 05:16:57.349 [INFO][4869] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.196/26] handle="k8s-pod-network.5bd3ebd1a110e1845761cfa6901d48c5bb629f4dccdf29ccc1c2ab9f662f8973" host="ip-172-31-23-74" Jul 15 05:16:57.443268 containerd[2005]: 2025-07-15 05:16:57.350 [INFO][4869] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
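Note: every CNI ADD in this log goes through the same "About to acquire / Acquired / Released host-wide IPAM lock" sequence, so address assignments on the node are strictly serialized even though several sandboxes are being set up concurrently. A minimal illustration of that pattern with a mutex (only a sketch of the serialization property, not Calico's allocator; which pod ends up with which address here depends on goroutine scheduling):

package main

import (
	"fmt"
	"sync"
)

// ipamLock stands in for the host-wide IPAM lock named in the log messages.
var ipamLock sync.Mutex

var next = 193 // next host octet in 192.168.114.192/26, as in the log

func assign(pod string, wg *sync.WaitGroup) {
	defer wg.Done()
	ipamLock.Lock()         // "Acquired host-wide IPAM lock."
	defer ipamLock.Unlock() // "Released host-wide IPAM lock."
	fmt.Printf("%s -> 192.168.114.%d/26\n", pod, next)
	next++
}

func main() {
	pods := []string{
		"calico-kube-controllers-6656f64997-pdnp6",
		"whisker-75bcfd49d-52jpz",
		"coredns-7c65d6cfc9-76jhj",
		"goldmane-58fd7646b9-w62jd",
		"coredns-7c65d6cfc9-xbgj9",
	}
	var wg sync.WaitGroup
	for _, p := range pods {
		wg.Add(1)
		go assign(p, &wg)
	}
	wg.Wait()
}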
Jul 15 05:16:57.443268 containerd[2005]: 2025-07-15 05:16:57.350 [INFO][4869] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.114.196/26] IPv6=[] ContainerID="5bd3ebd1a110e1845761cfa6901d48c5bb629f4dccdf29ccc1c2ab9f662f8973" HandleID="k8s-pod-network.5bd3ebd1a110e1845761cfa6901d48c5bb629f4dccdf29ccc1c2ab9f662f8973" Workload="ip--172--31--23--74-k8s-goldmane--58fd7646b9--w62jd-eth0" Jul 15 05:16:57.450907 containerd[2005]: 2025-07-15 05:16:57.354 [INFO][4825] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5bd3ebd1a110e1845761cfa6901d48c5bb629f4dccdf29ccc1c2ab9f662f8973" Namespace="calico-system" Pod="goldmane-58fd7646b9-w62jd" WorkloadEndpoint="ip--172--31--23--74-k8s-goldmane--58fd7646b9--w62jd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--74-k8s-goldmane--58fd7646b9--w62jd-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"281753dc-9760-4189-82b0-00b06864b158", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-74", ContainerID:"", Pod:"goldmane-58fd7646b9-w62jd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.114.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7fa58f77068", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:57.450907 containerd[2005]: 2025-07-15 05:16:57.354 [INFO][4825] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.196/32] ContainerID="5bd3ebd1a110e1845761cfa6901d48c5bb629f4dccdf29ccc1c2ab9f662f8973" Namespace="calico-system" Pod="goldmane-58fd7646b9-w62jd" WorkloadEndpoint="ip--172--31--23--74-k8s-goldmane--58fd7646b9--w62jd-eth0" Jul 15 05:16:57.450907 containerd[2005]: 2025-07-15 05:16:57.354 [INFO][4825] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7fa58f77068 ContainerID="5bd3ebd1a110e1845761cfa6901d48c5bb629f4dccdf29ccc1c2ab9f662f8973" Namespace="calico-system" Pod="goldmane-58fd7646b9-w62jd" WorkloadEndpoint="ip--172--31--23--74-k8s-goldmane--58fd7646b9--w62jd-eth0" Jul 15 05:16:57.450907 containerd[2005]: 2025-07-15 05:16:57.380 [INFO][4825] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5bd3ebd1a110e1845761cfa6901d48c5bb629f4dccdf29ccc1c2ab9f662f8973" Namespace="calico-system" Pod="goldmane-58fd7646b9-w62jd" WorkloadEndpoint="ip--172--31--23--74-k8s-goldmane--58fd7646b9--w62jd-eth0" Jul 15 05:16:57.450907 containerd[2005]: 2025-07-15 05:16:57.390 [INFO][4825] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5bd3ebd1a110e1845761cfa6901d48c5bb629f4dccdf29ccc1c2ab9f662f8973" Namespace="calico-system" Pod="goldmane-58fd7646b9-w62jd" 
WorkloadEndpoint="ip--172--31--23--74-k8s-goldmane--58fd7646b9--w62jd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--74-k8s-goldmane--58fd7646b9--w62jd-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"281753dc-9760-4189-82b0-00b06864b158", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-74", ContainerID:"5bd3ebd1a110e1845761cfa6901d48c5bb629f4dccdf29ccc1c2ab9f662f8973", Pod:"goldmane-58fd7646b9-w62jd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.114.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7fa58f77068", MAC:"9a:52:be:cc:0a:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:57.450907 containerd[2005]: 2025-07-15 05:16:57.432 [INFO][4825] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5bd3ebd1a110e1845761cfa6901d48c5bb629f4dccdf29ccc1c2ab9f662f8973" Namespace="calico-system" Pod="goldmane-58fd7646b9-w62jd" WorkloadEndpoint="ip--172--31--23--74-k8s-goldmane--58fd7646b9--w62jd-eth0" Jul 15 05:16:57.452392 systemd[1]: Started cri-containerd-4b39e07bad710ca5ff30e781e670c1bfc19fbe320993d6c051346e731d981910.scope - libcontainer container 4b39e07bad710ca5ff30e781e670c1bfc19fbe320993d6c051346e731d981910. Jul 15 05:16:57.462595 systemd[1]: Started cri-containerd-685f1bfc91f8d7c6874940d3dba3738575dba1f2370f1bd48e2c6ed5318f56f6.scope - libcontainer container 685f1bfc91f8d7c6874940d3dba3738575dba1f2370f1bd48e2c6ed5318f56f6. Jul 15 05:16:57.518614 systemd[1]: Started cri-containerd-405449a9cd99ce49581c2cf00e1eb4bbc08958be2d05147ef3b502d4fa003648.scope - libcontainer container 405449a9cd99ce49581c2cf00e1eb4bbc08958be2d05147ef3b502d4fa003648. 
Jul 15 05:16:57.576949 containerd[2005]: time="2025-07-15T05:16:57.576334359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8ff57bfb-v9955,Uid:fa1bb245-af3c-4846-a1c8-e5bcfe83ea18,Namespace:calico-apiserver,Attempt:0,}" Jul 15 05:16:57.578723 containerd[2005]: time="2025-07-15T05:16:57.578669558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8ff57bfb-4dgwh,Uid:e71f6908-5e30-4a56-99d8-319af24d49f8,Namespace:calico-apiserver,Attempt:0,}" Jul 15 05:16:57.590254 containerd[2005]: time="2025-07-15T05:16:57.590185127Z" level=info msg="connecting to shim 5bd3ebd1a110e1845761cfa6901d48c5bb629f4dccdf29ccc1c2ab9f662f8973" address="unix:///run/containerd/s/89674657d4be1062f642534d8d79cd55a489d9f7fe9035f8052433cfe7ef7b0f" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:16:57.642812 systemd-networkd[1721]: calia47854e085f: Link UP Jul 15 05:16:57.648837 systemd-networkd[1721]: calia47854e085f: Gained carrier Jul 15 05:16:57.745564 containerd[2005]: 2025-07-15 05:16:53.648 [INFO][4727] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 05:16:57.745564 containerd[2005]: 2025-07-15 05:16:53.820 [INFO][4727] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--74-k8s-coredns--7c65d6cfc9--xbgj9-eth0 coredns-7c65d6cfc9- kube-system 5988605f-13f7-444f-ac86-a1779962cf15 828 0 2025-07-15 05:16:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-74 coredns-7c65d6cfc9-xbgj9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia47854e085f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d3ef4d27dbc7a477c8487276cb82d1284ab24228f6a5703416b07d8abb5c2333" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xbgj9" WorkloadEndpoint="ip--172--31--23--74-k8s-coredns--7c65d6cfc9--xbgj9-" Jul 15 05:16:57.745564 containerd[2005]: 2025-07-15 05:16:53.820 [INFO][4727] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d3ef4d27dbc7a477c8487276cb82d1284ab24228f6a5703416b07d8abb5c2333" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xbgj9" WorkloadEndpoint="ip--172--31--23--74-k8s-coredns--7c65d6cfc9--xbgj9-eth0" Jul 15 05:16:57.745564 containerd[2005]: 2025-07-15 05:16:56.768 [INFO][4759] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d3ef4d27dbc7a477c8487276cb82d1284ab24228f6a5703416b07d8abb5c2333" HandleID="k8s-pod-network.d3ef4d27dbc7a477c8487276cb82d1284ab24228f6a5703416b07d8abb5c2333" Workload="ip--172--31--23--74-k8s-coredns--7c65d6cfc9--xbgj9-eth0" Jul 15 05:16:57.745564 containerd[2005]: 2025-07-15 05:16:56.770 [INFO][4759] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d3ef4d27dbc7a477c8487276cb82d1284ab24228f6a5703416b07d8abb5c2333" HandleID="k8s-pod-network.d3ef4d27dbc7a477c8487276cb82d1284ab24228f6a5703416b07d8abb5c2333" Workload="ip--172--31--23--74-k8s-coredns--7c65d6cfc9--xbgj9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000309d00), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-74", "pod":"coredns-7c65d6cfc9-xbgj9", "timestamp":"2025-07-15 05:16:56.768776926 +0000 UTC"}, Hostname:"ip-172-31-23-74", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:16:57.745564 containerd[2005]: 2025-07-15 05:16:56.770 [INFO][4759] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:16:57.745564 containerd[2005]: 2025-07-15 05:16:57.352 [INFO][4759] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 05:16:57.745564 containerd[2005]: 2025-07-15 05:16:57.352 [INFO][4759] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-74' Jul 15 05:16:57.745564 containerd[2005]: 2025-07-15 05:16:57.399 [INFO][4759] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d3ef4d27dbc7a477c8487276cb82d1284ab24228f6a5703416b07d8abb5c2333" host="ip-172-31-23-74" Jul 15 05:16:57.745564 containerd[2005]: 2025-07-15 05:16:57.442 [INFO][4759] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-74" Jul 15 05:16:57.745564 containerd[2005]: 2025-07-15 05:16:57.467 [INFO][4759] ipam/ipam.go 511: Trying affinity for 192.168.114.192/26 host="ip-172-31-23-74" Jul 15 05:16:57.745564 containerd[2005]: 2025-07-15 05:16:57.472 [INFO][4759] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.192/26 host="ip-172-31-23-74" Jul 15 05:16:57.745564 containerd[2005]: 2025-07-15 05:16:57.479 [INFO][4759] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.192/26 host="ip-172-31-23-74" Jul 15 05:16:57.745564 containerd[2005]: 2025-07-15 05:16:57.479 [INFO][4759] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.114.192/26 handle="k8s-pod-network.d3ef4d27dbc7a477c8487276cb82d1284ab24228f6a5703416b07d8abb5c2333" host="ip-172-31-23-74" Jul 15 05:16:57.745564 containerd[2005]: 2025-07-15 05:16:57.484 [INFO][4759] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d3ef4d27dbc7a477c8487276cb82d1284ab24228f6a5703416b07d8abb5c2333 Jul 15 05:16:57.745564 containerd[2005]: 2025-07-15 05:16:57.518 [INFO][4759] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.114.192/26 handle="k8s-pod-network.d3ef4d27dbc7a477c8487276cb82d1284ab24228f6a5703416b07d8abb5c2333" host="ip-172-31-23-74" Jul 15 05:16:57.745564 containerd[2005]: 2025-07-15 05:16:57.573 [INFO][4759] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.114.197/26] block=192.168.114.192/26 handle="k8s-pod-network.d3ef4d27dbc7a477c8487276cb82d1284ab24228f6a5703416b07d8abb5c2333" host="ip-172-31-23-74" Jul 15 05:16:57.745564 containerd[2005]: 2025-07-15 05:16:57.578 [INFO][4759] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.197/26] handle="k8s-pod-network.d3ef4d27dbc7a477c8487276cb82d1284ab24228f6a5703416b07d8abb5c2333" host="ip-172-31-23-74" Jul 15 05:16:57.745564 containerd[2005]: 2025-07-15 05:16:57.581 [INFO][4759] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 05:16:57.745564 containerd[2005]: 2025-07-15 05:16:57.581 [INFO][4759] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.114.197/26] IPv6=[] ContainerID="d3ef4d27dbc7a477c8487276cb82d1284ab24228f6a5703416b07d8abb5c2333" HandleID="k8s-pod-network.d3ef4d27dbc7a477c8487276cb82d1284ab24228f6a5703416b07d8abb5c2333" Workload="ip--172--31--23--74-k8s-coredns--7c65d6cfc9--xbgj9-eth0" Jul 15 05:16:57.748584 containerd[2005]: 2025-07-15 05:16:57.612 [INFO][4727] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d3ef4d27dbc7a477c8487276cb82d1284ab24228f6a5703416b07d8abb5c2333" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xbgj9" WorkloadEndpoint="ip--172--31--23--74-k8s-coredns--7c65d6cfc9--xbgj9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--74-k8s-coredns--7c65d6cfc9--xbgj9-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"5988605f-13f7-444f-ac86-a1779962cf15", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-74", ContainerID:"", Pod:"coredns-7c65d6cfc9-xbgj9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia47854e085f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:57.748584 containerd[2005]: 2025-07-15 05:16:57.612 [INFO][4727] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.197/32] ContainerID="d3ef4d27dbc7a477c8487276cb82d1284ab24228f6a5703416b07d8abb5c2333" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xbgj9" WorkloadEndpoint="ip--172--31--23--74-k8s-coredns--7c65d6cfc9--xbgj9-eth0" Jul 15 05:16:57.748584 containerd[2005]: 2025-07-15 05:16:57.612 [INFO][4727] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia47854e085f ContainerID="d3ef4d27dbc7a477c8487276cb82d1284ab24228f6a5703416b07d8abb5c2333" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xbgj9" WorkloadEndpoint="ip--172--31--23--74-k8s-coredns--7c65d6cfc9--xbgj9-eth0" Jul 15 05:16:57.748584 containerd[2005]: 2025-07-15 05:16:57.672 [INFO][4727] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d3ef4d27dbc7a477c8487276cb82d1284ab24228f6a5703416b07d8abb5c2333" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xbgj9" 
WorkloadEndpoint="ip--172--31--23--74-k8s-coredns--7c65d6cfc9--xbgj9-eth0" Jul 15 05:16:57.748584 containerd[2005]: 2025-07-15 05:16:57.683 [INFO][4727] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d3ef4d27dbc7a477c8487276cb82d1284ab24228f6a5703416b07d8abb5c2333" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xbgj9" WorkloadEndpoint="ip--172--31--23--74-k8s-coredns--7c65d6cfc9--xbgj9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--74-k8s-coredns--7c65d6cfc9--xbgj9-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"5988605f-13f7-444f-ac86-a1779962cf15", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-74", ContainerID:"d3ef4d27dbc7a477c8487276cb82d1284ab24228f6a5703416b07d8abb5c2333", Pod:"coredns-7c65d6cfc9-xbgj9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia47854e085f", MAC:"46:73:55:53:06:de", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:57.748584 containerd[2005]: 2025-07-15 05:16:57.722 [INFO][4727] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d3ef4d27dbc7a477c8487276cb82d1284ab24228f6a5703416b07d8abb5c2333" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xbgj9" WorkloadEndpoint="ip--172--31--23--74-k8s-coredns--7c65d6cfc9--xbgj9-eth0" Jul 15 05:16:57.752204 systemd[1]: Started cri-containerd-5bd3ebd1a110e1845761cfa6901d48c5bb629f4dccdf29ccc1c2ab9f662f8973.scope - libcontainer container 5bd3ebd1a110e1845761cfa6901d48c5bb629f4dccdf29ccc1c2ab9f662f8973. 
Jul 15 05:16:57.776010 systemd-networkd[1721]: calib07fa07687d: Link UP Jul 15 05:16:57.777777 systemd-networkd[1721]: calib07fa07687d: Gained carrier Jul 15 05:16:57.844688 containerd[2005]: 2025-07-15 05:16:54.736 [INFO][4831] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--74-k8s-csi--node--driver--vdxxc-eth0 csi-node-driver- calico-system 3c8ea6c2-6b54-497c-b3b0-486729d54117 725 0 2025-07-15 05:16:31 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-23-74 csi-node-driver-vdxxc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib07fa07687d [] [] }} ContainerID="40e31e14daba9bde2544d90724ee9367dcd31cfdbf4203f0c20a3b2d2b18094b" Namespace="calico-system" Pod="csi-node-driver-vdxxc" WorkloadEndpoint="ip--172--31--23--74-k8s-csi--node--driver--vdxxc-" Jul 15 05:16:57.844688 containerd[2005]: 2025-07-15 05:16:54.736 [INFO][4831] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="40e31e14daba9bde2544d90724ee9367dcd31cfdbf4203f0c20a3b2d2b18094b" Namespace="calico-system" Pod="csi-node-driver-vdxxc" WorkloadEndpoint="ip--172--31--23--74-k8s-csi--node--driver--vdxxc-eth0" Jul 15 05:16:57.844688 containerd[2005]: 2025-07-15 05:16:56.769 [INFO][4862] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="40e31e14daba9bde2544d90724ee9367dcd31cfdbf4203f0c20a3b2d2b18094b" HandleID="k8s-pod-network.40e31e14daba9bde2544d90724ee9367dcd31cfdbf4203f0c20a3b2d2b18094b" Workload="ip--172--31--23--74-k8s-csi--node--driver--vdxxc-eth0" Jul 15 05:16:57.844688 containerd[2005]: 2025-07-15 05:16:56.769 [INFO][4862] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="40e31e14daba9bde2544d90724ee9367dcd31cfdbf4203f0c20a3b2d2b18094b" HandleID="k8s-pod-network.40e31e14daba9bde2544d90724ee9367dcd31cfdbf4203f0c20a3b2d2b18094b" Workload="ip--172--31--23--74-k8s-csi--node--driver--vdxxc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039a370), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-74", "pod":"csi-node-driver-vdxxc", "timestamp":"2025-07-15 05:16:56.76944817 +0000 UTC"}, Hostname:"ip-172-31-23-74", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:16:57.844688 containerd[2005]: 2025-07-15 05:16:56.770 [INFO][4862] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:16:57.844688 containerd[2005]: 2025-07-15 05:16:57.580 [INFO][4862] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 05:16:57.844688 containerd[2005]: 2025-07-15 05:16:57.580 [INFO][4862] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-74' Jul 15 05:16:57.844688 containerd[2005]: 2025-07-15 05:16:57.618 [INFO][4862] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.40e31e14daba9bde2544d90724ee9367dcd31cfdbf4203f0c20a3b2d2b18094b" host="ip-172-31-23-74" Jul 15 05:16:57.844688 containerd[2005]: 2025-07-15 05:16:57.629 [INFO][4862] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-74" Jul 15 05:16:57.844688 containerd[2005]: 2025-07-15 05:16:57.661 [INFO][4862] ipam/ipam.go 511: Trying affinity for 192.168.114.192/26 host="ip-172-31-23-74" Jul 15 05:16:57.844688 containerd[2005]: 2025-07-15 05:16:57.680 [INFO][4862] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.192/26 host="ip-172-31-23-74" Jul 15 05:16:57.844688 containerd[2005]: 2025-07-15 05:16:57.697 [INFO][4862] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.192/26 host="ip-172-31-23-74" Jul 15 05:16:57.844688 containerd[2005]: 2025-07-15 05:16:57.699 [INFO][4862] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.114.192/26 handle="k8s-pod-network.40e31e14daba9bde2544d90724ee9367dcd31cfdbf4203f0c20a3b2d2b18094b" host="ip-172-31-23-74" Jul 15 05:16:57.844688 containerd[2005]: 2025-07-15 05:16:57.708 [INFO][4862] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.40e31e14daba9bde2544d90724ee9367dcd31cfdbf4203f0c20a3b2d2b18094b Jul 15 05:16:57.844688 containerd[2005]: 2025-07-15 05:16:57.726 [INFO][4862] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.114.192/26 handle="k8s-pod-network.40e31e14daba9bde2544d90724ee9367dcd31cfdbf4203f0c20a3b2d2b18094b" host="ip-172-31-23-74" Jul 15 05:16:57.844688 containerd[2005]: 2025-07-15 05:16:57.747 [INFO][4862] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.114.198/26] block=192.168.114.192/26 handle="k8s-pod-network.40e31e14daba9bde2544d90724ee9367dcd31cfdbf4203f0c20a3b2d2b18094b" host="ip-172-31-23-74" Jul 15 05:16:57.844688 containerd[2005]: 2025-07-15 05:16:57.748 [INFO][4862] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.198/26] handle="k8s-pod-network.40e31e14daba9bde2544d90724ee9367dcd31cfdbf4203f0c20a3b2d2b18094b" host="ip-172-31-23-74" Jul 15 05:16:57.844688 containerd[2005]: 2025-07-15 05:16:57.748 [INFO][4862] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 05:16:57.844688 containerd[2005]: 2025-07-15 05:16:57.748 [INFO][4862] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.114.198/26] IPv6=[] ContainerID="40e31e14daba9bde2544d90724ee9367dcd31cfdbf4203f0c20a3b2d2b18094b" HandleID="k8s-pod-network.40e31e14daba9bde2544d90724ee9367dcd31cfdbf4203f0c20a3b2d2b18094b" Workload="ip--172--31--23--74-k8s-csi--node--driver--vdxxc-eth0" Jul 15 05:16:57.847611 containerd[2005]: 2025-07-15 05:16:57.768 [INFO][4831] cni-plugin/k8s.go 418: Populated endpoint ContainerID="40e31e14daba9bde2544d90724ee9367dcd31cfdbf4203f0c20a3b2d2b18094b" Namespace="calico-system" Pod="csi-node-driver-vdxxc" WorkloadEndpoint="ip--172--31--23--74-k8s-csi--node--driver--vdxxc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--74-k8s-csi--node--driver--vdxxc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3c8ea6c2-6b54-497c-b3b0-486729d54117", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-74", ContainerID:"", Pod:"csi-node-driver-vdxxc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib07fa07687d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:57.847611 containerd[2005]: 2025-07-15 05:16:57.769 [INFO][4831] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.198/32] ContainerID="40e31e14daba9bde2544d90724ee9367dcd31cfdbf4203f0c20a3b2d2b18094b" Namespace="calico-system" Pod="csi-node-driver-vdxxc" WorkloadEndpoint="ip--172--31--23--74-k8s-csi--node--driver--vdxxc-eth0" Jul 15 05:16:57.847611 containerd[2005]: 2025-07-15 05:16:57.769 [INFO][4831] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib07fa07687d ContainerID="40e31e14daba9bde2544d90724ee9367dcd31cfdbf4203f0c20a3b2d2b18094b" Namespace="calico-system" Pod="csi-node-driver-vdxxc" WorkloadEndpoint="ip--172--31--23--74-k8s-csi--node--driver--vdxxc-eth0" Jul 15 05:16:57.847611 containerd[2005]: 2025-07-15 05:16:57.782 [INFO][4831] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="40e31e14daba9bde2544d90724ee9367dcd31cfdbf4203f0c20a3b2d2b18094b" Namespace="calico-system" Pod="csi-node-driver-vdxxc" WorkloadEndpoint="ip--172--31--23--74-k8s-csi--node--driver--vdxxc-eth0" Jul 15 05:16:57.847611 containerd[2005]: 2025-07-15 05:16:57.784 [INFO][4831] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="40e31e14daba9bde2544d90724ee9367dcd31cfdbf4203f0c20a3b2d2b18094b" 
Namespace="calico-system" Pod="csi-node-driver-vdxxc" WorkloadEndpoint="ip--172--31--23--74-k8s-csi--node--driver--vdxxc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--74-k8s-csi--node--driver--vdxxc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3c8ea6c2-6b54-497c-b3b0-486729d54117", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-74", ContainerID:"40e31e14daba9bde2544d90724ee9367dcd31cfdbf4203f0c20a3b2d2b18094b", Pod:"csi-node-driver-vdxxc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib07fa07687d", MAC:"66:46:87:7e:3b:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:57.847611 containerd[2005]: 2025-07-15 05:16:57.816 [INFO][4831] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="40e31e14daba9bde2544d90724ee9367dcd31cfdbf4203f0c20a3b2d2b18094b" Namespace="calico-system" Pod="csi-node-driver-vdxxc" WorkloadEndpoint="ip--172--31--23--74-k8s-csi--node--driver--vdxxc-eth0" Jul 15 05:16:57.897060 containerd[2005]: time="2025-07-15T05:16:57.896925275Z" level=info msg="connecting to shim d3ef4d27dbc7a477c8487276cb82d1284ab24228f6a5703416b07d8abb5c2333" address="unix:///run/containerd/s/164931eb055363b373839bf7995555a43c3580e95bbba98e508551f72400091d" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:16:57.921019 containerd[2005]: time="2025-07-15T05:16:57.920860452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-76jhj,Uid:6aac3249-80fb-4b1f-a28c-3ac8c29919fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"405449a9cd99ce49581c2cf00e1eb4bbc08958be2d05147ef3b502d4fa003648\"" Jul 15 05:16:57.950511 containerd[2005]: time="2025-07-15T05:16:57.950463710Z" level=info msg="CreateContainer within sandbox \"405449a9cd99ce49581c2cf00e1eb4bbc08958be2d05147ef3b502d4fa003648\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 05:16:58.026566 containerd[2005]: time="2025-07-15T05:16:58.026411743Z" level=info msg="connecting to shim 40e31e14daba9bde2544d90724ee9367dcd31cfdbf4203f0c20a3b2d2b18094b" address="unix:///run/containerd/s/17c4549eccf3110954181371f57f3ecb4ccb6a45a7830e32ca48c4201f6674c5" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:16:58.076445 systemd-networkd[1721]: cali51b8ddb0ac8: Gained IPv6LL Jul 15 05:16:58.117536 systemd[1]: Started cri-containerd-d3ef4d27dbc7a477c8487276cb82d1284ab24228f6a5703416b07d8abb5c2333.scope - libcontainer container 
d3ef4d27dbc7a477c8487276cb82d1284ab24228f6a5703416b07d8abb5c2333. Jul 15 05:16:58.171959 containerd[2005]: time="2025-07-15T05:16:58.170664875Z" level=info msg="Container 251a9ccf1fc39ee07db7b8443776c07f2c77666388ab060020f46badb993b3b4: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:16:58.172139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3516946988.mount: Deactivated successfully. Jul 15 05:16:58.182638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2006027582.mount: Deactivated successfully. Jul 15 05:16:58.199281 systemd[1]: Started cri-containerd-40e31e14daba9bde2544d90724ee9367dcd31cfdbf4203f0c20a3b2d2b18094b.scope - libcontainer container 40e31e14daba9bde2544d90724ee9367dcd31cfdbf4203f0c20a3b2d2b18094b. Jul 15 05:16:58.230660 containerd[2005]: time="2025-07-15T05:16:58.230523114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75bcfd49d-52jpz,Uid:e08a1693-0026-4f90-af15-cd089075a17a,Namespace:calico-system,Attempt:0,} returns sandbox id \"4b39e07bad710ca5ff30e781e670c1bfc19fbe320993d6c051346e731d981910\"" Jul 15 05:16:58.250679 containerd[2005]: time="2025-07-15T05:16:58.250547556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 15 05:16:58.263076 containerd[2005]: time="2025-07-15T05:16:58.262558435Z" level=info msg="CreateContainer within sandbox \"405449a9cd99ce49581c2cf00e1eb4bbc08958be2d05147ef3b502d4fa003648\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"251a9ccf1fc39ee07db7b8443776c07f2c77666388ab060020f46badb993b3b4\"" Jul 15 05:16:58.271149 containerd[2005]: time="2025-07-15T05:16:58.271064149Z" level=info msg="StartContainer for \"251a9ccf1fc39ee07db7b8443776c07f2c77666388ab060020f46badb993b3b4\"" Jul 15 05:16:58.273386 containerd[2005]: time="2025-07-15T05:16:58.273219143Z" level=info msg="connecting to shim 251a9ccf1fc39ee07db7b8443776c07f2c77666388ab060020f46badb993b3b4" address="unix:///run/containerd/s/9c43cce3fbf5c3a919db84031437452b68a561fbb161fa7e920fc74cdb1e7281" protocol=ttrpc version=3 Jul 15 05:16:58.312248 systemd-networkd[1721]: calif6621c77e73: Link UP Jul 15 05:16:58.323305 systemd-networkd[1721]: calif6621c77e73: Gained carrier Jul 15 05:16:58.349636 systemd[1]: Started cri-containerd-251a9ccf1fc39ee07db7b8443776c07f2c77666388ab060020f46badb993b3b4.scope - libcontainer container 251a9ccf1fc39ee07db7b8443776c07f2c77666388ab060020f46badb993b3b4. 
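
The repeated "connecting to shim ... protocol=ttrpc version=3" messages are internal plumbing between containerd and its per-container runtime v2 shims. To inspect the containers whose IDs appear in these lines from outside, the usual route is containerd's public Go client (or crictl); a minimal sketch follows, where the socket path and the k8s.io namespace are the conventional defaults on such a node and are assumptions rather than values taken from this log.

```go
// Minimal sketch: listing the containers referenced in the log via containerd's
// public Go client. Socket path and namespace are assumed defaults, not log values.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		info, err := c.Info(ctx)
		if err != nil {
			continue
		}
		fmt.Printf("%s  image=%s\n", c.ID(), info.Image)
	}
}
```
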
Jul 15 05:16:58.386132 containerd[2005]: time="2025-07-15T05:16:58.386093744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xbgj9,Uid:5988605f-13f7-444f-ac86-a1779962cf15,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3ef4d27dbc7a477c8487276cb82d1284ab24228f6a5703416b07d8abb5c2333\"" Jul 15 05:16:58.396072 systemd-networkd[1721]: cali53418b569fa: Gained IPv6LL Jul 15 05:16:58.409361 containerd[2005]: time="2025-07-15T05:16:58.409317461Z" level=info msg="CreateContainer within sandbox \"d3ef4d27dbc7a477c8487276cb82d1284ab24228f6a5703416b07d8abb5c2333\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 05:16:58.438338 containerd[2005]: 2025-07-15 05:16:57.898 [INFO][5163] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--74-k8s-calico--apiserver--d8ff57bfb--v9955-eth0 calico-apiserver-d8ff57bfb- calico-apiserver fa1bb245-af3c-4846-a1c8-e5bcfe83ea18 831 0 2025-07-15 05:16:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d8ff57bfb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-74 calico-apiserver-d8ff57bfb-v9955 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif6621c77e73 [] [] }} ContainerID="56d23e4809a488ae070fab67d84edc39b1b02ddab019cf619ea2f571049f6c03" Namespace="calico-apiserver" Pod="calico-apiserver-d8ff57bfb-v9955" WorkloadEndpoint="ip--172--31--23--74-k8s-calico--apiserver--d8ff57bfb--v9955-" Jul 15 05:16:58.438338 containerd[2005]: 2025-07-15 05:16:57.898 [INFO][5163] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="56d23e4809a488ae070fab67d84edc39b1b02ddab019cf619ea2f571049f6c03" Namespace="calico-apiserver" Pod="calico-apiserver-d8ff57bfb-v9955" WorkloadEndpoint="ip--172--31--23--74-k8s-calico--apiserver--d8ff57bfb--v9955-eth0" Jul 15 05:16:58.438338 containerd[2005]: 2025-07-15 05:16:58.145 [INFO][5236] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="56d23e4809a488ae070fab67d84edc39b1b02ddab019cf619ea2f571049f6c03" HandleID="k8s-pod-network.56d23e4809a488ae070fab67d84edc39b1b02ddab019cf619ea2f571049f6c03" Workload="ip--172--31--23--74-k8s-calico--apiserver--d8ff57bfb--v9955-eth0" Jul 15 05:16:58.438338 containerd[2005]: 2025-07-15 05:16:58.145 [INFO][5236] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="56d23e4809a488ae070fab67d84edc39b1b02ddab019cf619ea2f571049f6c03" HandleID="k8s-pod-network.56d23e4809a488ae070fab67d84edc39b1b02ddab019cf619ea2f571049f6c03" Workload="ip--172--31--23--74-k8s-calico--apiserver--d8ff57bfb--v9955-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033d6c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-23-74", "pod":"calico-apiserver-d8ff57bfb-v9955", "timestamp":"2025-07-15 05:16:58.145194024 +0000 UTC"}, Hostname:"ip-172-31-23-74", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:16:58.438338 containerd[2005]: 2025-07-15 05:16:58.145 [INFO][5236] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 15 05:16:58.438338 containerd[2005]: 2025-07-15 05:16:58.145 [INFO][5236] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 05:16:58.438338 containerd[2005]: 2025-07-15 05:16:58.145 [INFO][5236] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-74' Jul 15 05:16:58.438338 containerd[2005]: 2025-07-15 05:16:58.187 [INFO][5236] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.56d23e4809a488ae070fab67d84edc39b1b02ddab019cf619ea2f571049f6c03" host="ip-172-31-23-74" Jul 15 05:16:58.438338 containerd[2005]: 2025-07-15 05:16:58.207 [INFO][5236] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-74" Jul 15 05:16:58.438338 containerd[2005]: 2025-07-15 05:16:58.220 [INFO][5236] ipam/ipam.go 511: Trying affinity for 192.168.114.192/26 host="ip-172-31-23-74" Jul 15 05:16:58.438338 containerd[2005]: 2025-07-15 05:16:58.225 [INFO][5236] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.192/26 host="ip-172-31-23-74" Jul 15 05:16:58.438338 containerd[2005]: 2025-07-15 05:16:58.229 [INFO][5236] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.192/26 host="ip-172-31-23-74" Jul 15 05:16:58.438338 containerd[2005]: 2025-07-15 05:16:58.229 [INFO][5236] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.114.192/26 handle="k8s-pod-network.56d23e4809a488ae070fab67d84edc39b1b02ddab019cf619ea2f571049f6c03" host="ip-172-31-23-74" Jul 15 05:16:58.438338 containerd[2005]: 2025-07-15 05:16:58.232 [INFO][5236] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.56d23e4809a488ae070fab67d84edc39b1b02ddab019cf619ea2f571049f6c03 Jul 15 05:16:58.438338 containerd[2005]: 2025-07-15 05:16:58.241 [INFO][5236] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.114.192/26 handle="k8s-pod-network.56d23e4809a488ae070fab67d84edc39b1b02ddab019cf619ea2f571049f6c03" host="ip-172-31-23-74" Jul 15 05:16:58.438338 containerd[2005]: 2025-07-15 05:16:58.265 [INFO][5236] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.114.199/26] block=192.168.114.192/26 handle="k8s-pod-network.56d23e4809a488ae070fab67d84edc39b1b02ddab019cf619ea2f571049f6c03" host="ip-172-31-23-74" Jul 15 05:16:58.438338 containerd[2005]: 2025-07-15 05:16:58.265 [INFO][5236] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.199/26] handle="k8s-pod-network.56d23e4809a488ae070fab67d84edc39b1b02ddab019cf619ea2f571049f6c03" host="ip-172-31-23-74" Jul 15 05:16:58.438338 containerd[2005]: 2025-07-15 05:16:58.265 [INFO][5236] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 05:16:58.438338 containerd[2005]: 2025-07-15 05:16:58.265 [INFO][5236] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.114.199/26] IPv6=[] ContainerID="56d23e4809a488ae070fab67d84edc39b1b02ddab019cf619ea2f571049f6c03" HandleID="k8s-pod-network.56d23e4809a488ae070fab67d84edc39b1b02ddab019cf619ea2f571049f6c03" Workload="ip--172--31--23--74-k8s-calico--apiserver--d8ff57bfb--v9955-eth0" Jul 15 05:16:58.439212 containerd[2005]: 2025-07-15 05:16:58.274 [INFO][5163] cni-plugin/k8s.go 418: Populated endpoint ContainerID="56d23e4809a488ae070fab67d84edc39b1b02ddab019cf619ea2f571049f6c03" Namespace="calico-apiserver" Pod="calico-apiserver-d8ff57bfb-v9955" WorkloadEndpoint="ip--172--31--23--74-k8s-calico--apiserver--d8ff57bfb--v9955-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--74-k8s-calico--apiserver--d8ff57bfb--v9955-eth0", GenerateName:"calico-apiserver-d8ff57bfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"fa1bb245-af3c-4846-a1c8-e5bcfe83ea18", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d8ff57bfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-74", ContainerID:"", Pod:"calico-apiserver-d8ff57bfb-v9955", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif6621c77e73", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:58.439212 containerd[2005]: 2025-07-15 05:16:58.276 [INFO][5163] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.199/32] ContainerID="56d23e4809a488ae070fab67d84edc39b1b02ddab019cf619ea2f571049f6c03" Namespace="calico-apiserver" Pod="calico-apiserver-d8ff57bfb-v9955" WorkloadEndpoint="ip--172--31--23--74-k8s-calico--apiserver--d8ff57bfb--v9955-eth0" Jul 15 05:16:58.439212 containerd[2005]: 2025-07-15 05:16:58.276 [INFO][5163] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif6621c77e73 ContainerID="56d23e4809a488ae070fab67d84edc39b1b02ddab019cf619ea2f571049f6c03" Namespace="calico-apiserver" Pod="calico-apiserver-d8ff57bfb-v9955" WorkloadEndpoint="ip--172--31--23--74-k8s-calico--apiserver--d8ff57bfb--v9955-eth0" Jul 15 05:16:58.439212 containerd[2005]: 2025-07-15 05:16:58.337 [INFO][5163] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="56d23e4809a488ae070fab67d84edc39b1b02ddab019cf619ea2f571049f6c03" Namespace="calico-apiserver" Pod="calico-apiserver-d8ff57bfb-v9955" WorkloadEndpoint="ip--172--31--23--74-k8s-calico--apiserver--d8ff57bfb--v9955-eth0" Jul 15 05:16:58.439212 containerd[2005]: 2025-07-15 05:16:58.350 [INFO][5163] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="56d23e4809a488ae070fab67d84edc39b1b02ddab019cf619ea2f571049f6c03" Namespace="calico-apiserver" Pod="calico-apiserver-d8ff57bfb-v9955" WorkloadEndpoint="ip--172--31--23--74-k8s-calico--apiserver--d8ff57bfb--v9955-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--74-k8s-calico--apiserver--d8ff57bfb--v9955-eth0", GenerateName:"calico-apiserver-d8ff57bfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"fa1bb245-af3c-4846-a1c8-e5bcfe83ea18", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d8ff57bfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-74", ContainerID:"56d23e4809a488ae070fab67d84edc39b1b02ddab019cf619ea2f571049f6c03", Pod:"calico-apiserver-d8ff57bfb-v9955", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif6621c77e73", MAC:"a2:2d:23:1b:6d:02", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:58.439212 containerd[2005]: 2025-07-15 05:16:58.424 [INFO][5163] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="56d23e4809a488ae070fab67d84edc39b1b02ddab019cf619ea2f571049f6c03" Namespace="calico-apiserver" Pod="calico-apiserver-d8ff57bfb-v9955" WorkloadEndpoint="ip--172--31--23--74-k8s-calico--apiserver--d8ff57bfb--v9955-eth0" Jul 15 05:16:58.452919 containerd[2005]: time="2025-07-15T05:16:58.452816811Z" level=info msg="Container ef68d2787b8ad53da0bf874ca4f1fdf9d1878a409d72d60528f041644e4e40df: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:16:58.457524 containerd[2005]: time="2025-07-15T05:16:58.457476480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-w62jd,Uid:281753dc-9760-4189-82b0-00b06864b158,Namespace:calico-system,Attempt:0,} returns sandbox id \"5bd3ebd1a110e1845761cfa6901d48c5bb629f4dccdf29ccc1c2ab9f662f8973\"" Jul 15 05:16:58.469237 containerd[2005]: time="2025-07-15T05:16:58.469201041Z" level=info msg="CreateContainer within sandbox \"d3ef4d27dbc7a477c8487276cb82d1284ab24228f6a5703416b07d8abb5c2333\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ef68d2787b8ad53da0bf874ca4f1fdf9d1878a409d72d60528f041644e4e40df\"" Jul 15 05:16:58.489253 containerd[2005]: time="2025-07-15T05:16:58.489167342Z" level=info msg="StartContainer for \"ef68d2787b8ad53da0bf874ca4f1fdf9d1878a409d72d60528f041644e4e40df\"" Jul 15 05:16:58.490721 systemd-networkd[1721]: cali8a482a95b85: Link UP Jul 15 05:16:58.493771 systemd-networkd[1721]: cali8a482a95b85: Gained carrier Jul 15 05:16:58.516126 containerd[2005]: time="2025-07-15T05:16:58.515944647Z" level=info msg="connecting to shim 
ef68d2787b8ad53da0bf874ca4f1fdf9d1878a409d72d60528f041644e4e40df" address="unix:///run/containerd/s/164931eb055363b373839bf7995555a43c3580e95bbba98e508551f72400091d" protocol=ttrpc version=3 Jul 15 05:16:58.524129 systemd-networkd[1721]: cali7fa58f77068: Gained IPv6LL Jul 15 05:16:58.569850 containerd[2005]: 2025-07-15 05:16:57.935 [INFO][5172] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--74-k8s-calico--apiserver--d8ff57bfb--4dgwh-eth0 calico-apiserver-d8ff57bfb- calico-apiserver e71f6908-5e30-4a56-99d8-319af24d49f8 830 0 2025-07-15 05:16:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d8ff57bfb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-74 calico-apiserver-d8ff57bfb-4dgwh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8a482a95b85 [] [] }} ContainerID="060f92ae965d5018f5b5b15ad16eb3872429a794faad1a8f6b31b1406c73fdc3" Namespace="calico-apiserver" Pod="calico-apiserver-d8ff57bfb-4dgwh" WorkloadEndpoint="ip--172--31--23--74-k8s-calico--apiserver--d8ff57bfb--4dgwh-" Jul 15 05:16:58.569850 containerd[2005]: 2025-07-15 05:16:57.940 [INFO][5172] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="060f92ae965d5018f5b5b15ad16eb3872429a794faad1a8f6b31b1406c73fdc3" Namespace="calico-apiserver" Pod="calico-apiserver-d8ff57bfb-4dgwh" WorkloadEndpoint="ip--172--31--23--74-k8s-calico--apiserver--d8ff57bfb--4dgwh-eth0" Jul 15 05:16:58.569850 containerd[2005]: 2025-07-15 05:16:58.284 [INFO][5253] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="060f92ae965d5018f5b5b15ad16eb3872429a794faad1a8f6b31b1406c73fdc3" HandleID="k8s-pod-network.060f92ae965d5018f5b5b15ad16eb3872429a794faad1a8f6b31b1406c73fdc3" Workload="ip--172--31--23--74-k8s-calico--apiserver--d8ff57bfb--4dgwh-eth0" Jul 15 05:16:58.569850 containerd[2005]: 2025-07-15 05:16:58.284 [INFO][5253] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="060f92ae965d5018f5b5b15ad16eb3872429a794faad1a8f6b31b1406c73fdc3" HandleID="k8s-pod-network.060f92ae965d5018f5b5b15ad16eb3872429a794faad1a8f6b31b1406c73fdc3" Workload="ip--172--31--23--74-k8s-calico--apiserver--d8ff57bfb--4dgwh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fcc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-23-74", "pod":"calico-apiserver-d8ff57bfb-4dgwh", "timestamp":"2025-07-15 05:16:58.284154607 +0000 UTC"}, Hostname:"ip-172-31-23-74", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:16:58.569850 containerd[2005]: 2025-07-15 05:16:58.284 [INFO][5253] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:16:58.569850 containerd[2005]: 2025-07-15 05:16:58.284 [INFO][5253] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 05:16:58.569850 containerd[2005]: 2025-07-15 05:16:58.284 [INFO][5253] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-74' Jul 15 05:16:58.569850 containerd[2005]: 2025-07-15 05:16:58.343 [INFO][5253] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.060f92ae965d5018f5b5b15ad16eb3872429a794faad1a8f6b31b1406c73fdc3" host="ip-172-31-23-74" Jul 15 05:16:58.569850 containerd[2005]: 2025-07-15 05:16:58.369 [INFO][5253] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-74" Jul 15 05:16:58.569850 containerd[2005]: 2025-07-15 05:16:58.431 [INFO][5253] ipam/ipam.go 511: Trying affinity for 192.168.114.192/26 host="ip-172-31-23-74" Jul 15 05:16:58.569850 containerd[2005]: 2025-07-15 05:16:58.435 [INFO][5253] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.192/26 host="ip-172-31-23-74" Jul 15 05:16:58.569850 containerd[2005]: 2025-07-15 05:16:58.443 [INFO][5253] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.192/26 host="ip-172-31-23-74" Jul 15 05:16:58.569850 containerd[2005]: 2025-07-15 05:16:58.443 [INFO][5253] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.114.192/26 handle="k8s-pod-network.060f92ae965d5018f5b5b15ad16eb3872429a794faad1a8f6b31b1406c73fdc3" host="ip-172-31-23-74" Jul 15 05:16:58.569850 containerd[2005]: 2025-07-15 05:16:58.446 [INFO][5253] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.060f92ae965d5018f5b5b15ad16eb3872429a794faad1a8f6b31b1406c73fdc3 Jul 15 05:16:58.569850 containerd[2005]: 2025-07-15 05:16:58.460 [INFO][5253] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.114.192/26 handle="k8s-pod-network.060f92ae965d5018f5b5b15ad16eb3872429a794faad1a8f6b31b1406c73fdc3" host="ip-172-31-23-74" Jul 15 05:16:58.569850 containerd[2005]: 2025-07-15 05:16:58.476 [INFO][5253] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.114.200/26] block=192.168.114.192/26 handle="k8s-pod-network.060f92ae965d5018f5b5b15ad16eb3872429a794faad1a8f6b31b1406c73fdc3" host="ip-172-31-23-74" Jul 15 05:16:58.569850 containerd[2005]: 2025-07-15 05:16:58.476 [INFO][5253] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.200/26] handle="k8s-pod-network.060f92ae965d5018f5b5b15ad16eb3872429a794faad1a8f6b31b1406c73fdc3" host="ip-172-31-23-74" Jul 15 05:16:58.569850 containerd[2005]: 2025-07-15 05:16:58.477 [INFO][5253] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 05:16:58.569850 containerd[2005]: 2025-07-15 05:16:58.477 [INFO][5253] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.114.200/26] IPv6=[] ContainerID="060f92ae965d5018f5b5b15ad16eb3872429a794faad1a8f6b31b1406c73fdc3" HandleID="k8s-pod-network.060f92ae965d5018f5b5b15ad16eb3872429a794faad1a8f6b31b1406c73fdc3" Workload="ip--172--31--23--74-k8s-calico--apiserver--d8ff57bfb--4dgwh-eth0" Jul 15 05:16:58.574324 containerd[2005]: 2025-07-15 05:16:58.481 [INFO][5172] cni-plugin/k8s.go 418: Populated endpoint ContainerID="060f92ae965d5018f5b5b15ad16eb3872429a794faad1a8f6b31b1406c73fdc3" Namespace="calico-apiserver" Pod="calico-apiserver-d8ff57bfb-4dgwh" WorkloadEndpoint="ip--172--31--23--74-k8s-calico--apiserver--d8ff57bfb--4dgwh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--74-k8s-calico--apiserver--d8ff57bfb--4dgwh-eth0", GenerateName:"calico-apiserver-d8ff57bfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"e71f6908-5e30-4a56-99d8-319af24d49f8", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d8ff57bfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-74", ContainerID:"", Pod:"calico-apiserver-d8ff57bfb-4dgwh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8a482a95b85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:58.574324 containerd[2005]: 2025-07-15 05:16:58.481 [INFO][5172] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.200/32] ContainerID="060f92ae965d5018f5b5b15ad16eb3872429a794faad1a8f6b31b1406c73fdc3" Namespace="calico-apiserver" Pod="calico-apiserver-d8ff57bfb-4dgwh" WorkloadEndpoint="ip--172--31--23--74-k8s-calico--apiserver--d8ff57bfb--4dgwh-eth0" Jul 15 05:16:58.574324 containerd[2005]: 2025-07-15 05:16:58.481 [INFO][5172] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8a482a95b85 ContainerID="060f92ae965d5018f5b5b15ad16eb3872429a794faad1a8f6b31b1406c73fdc3" Namespace="calico-apiserver" Pod="calico-apiserver-d8ff57bfb-4dgwh" WorkloadEndpoint="ip--172--31--23--74-k8s-calico--apiserver--d8ff57bfb--4dgwh-eth0" Jul 15 05:16:58.574324 containerd[2005]: 2025-07-15 05:16:58.498 [INFO][5172] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="060f92ae965d5018f5b5b15ad16eb3872429a794faad1a8f6b31b1406c73fdc3" Namespace="calico-apiserver" Pod="calico-apiserver-d8ff57bfb-4dgwh" WorkloadEndpoint="ip--172--31--23--74-k8s-calico--apiserver--d8ff57bfb--4dgwh-eth0" Jul 15 05:16:58.574324 containerd[2005]: 2025-07-15 05:16:58.511 [INFO][5172] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="060f92ae965d5018f5b5b15ad16eb3872429a794faad1a8f6b31b1406c73fdc3" Namespace="calico-apiserver" Pod="calico-apiserver-d8ff57bfb-4dgwh" WorkloadEndpoint="ip--172--31--23--74-k8s-calico--apiserver--d8ff57bfb--4dgwh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--74-k8s-calico--apiserver--d8ff57bfb--4dgwh-eth0", GenerateName:"calico-apiserver-d8ff57bfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"e71f6908-5e30-4a56-99d8-319af24d49f8", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d8ff57bfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-74", ContainerID:"060f92ae965d5018f5b5b15ad16eb3872429a794faad1a8f6b31b1406c73fdc3", Pod:"calico-apiserver-d8ff57bfb-4dgwh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8a482a95b85", MAC:"3a:de:95:5b:e6:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:58.574324 containerd[2005]: 2025-07-15 05:16:58.563 [INFO][5172] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="060f92ae965d5018f5b5b15ad16eb3872429a794faad1a8f6b31b1406c73fdc3" Namespace="calico-apiserver" Pod="calico-apiserver-d8ff57bfb-4dgwh" WorkloadEndpoint="ip--172--31--23--74-k8s-calico--apiserver--d8ff57bfb--4dgwh-eth0" Jul 15 05:16:58.580156 containerd[2005]: time="2025-07-15T05:16:58.580094333Z" level=info msg="connecting to shim 56d23e4809a488ae070fab67d84edc39b1b02ddab019cf619ea2f571049f6c03" address="unix:///run/containerd/s/104dca45853d5f8e97e88ce6d1024ec7709c276bc9f46ddae254bd935a9b67a4" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:16:58.630723 systemd[1]: Started cri-containerd-ef68d2787b8ad53da0bf874ca4f1fdf9d1878a409d72d60528f041644e4e40df.scope - libcontainer container ef68d2787b8ad53da0bf874ca4f1fdf9d1878a409d72d60528f041644e4e40df. 
Jul 15 05:16:58.633115 containerd[2005]: time="2025-07-15T05:16:58.633057379Z" level=info msg="StartContainer for \"251a9ccf1fc39ee07db7b8443776c07f2c77666388ab060020f46badb993b3b4\" returns successfully" Jul 15 05:16:58.656128 containerd[2005]: time="2025-07-15T05:16:58.655178696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6656f64997-pdnp6,Uid:a11110d2-6068-4e9c-9e8e-a93325afe40c,Namespace:calico-system,Attempt:0,} returns sandbox id \"685f1bfc91f8d7c6874940d3dba3738575dba1f2370f1bd48e2c6ed5318f56f6\"" Jul 15 05:16:58.660545 containerd[2005]: time="2025-07-15T05:16:58.660497067Z" level=info msg="connecting to shim 060f92ae965d5018f5b5b15ad16eb3872429a794faad1a8f6b31b1406c73fdc3" address="unix:///run/containerd/s/8e5ef1cd0d1ce5aa76ceff5f350687cd36d3b70a89f81e4a226adab4eda158f7" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:16:58.721396 systemd[1]: Started cri-containerd-060f92ae965d5018f5b5b15ad16eb3872429a794faad1a8f6b31b1406c73fdc3.scope - libcontainer container 060f92ae965d5018f5b5b15ad16eb3872429a794faad1a8f6b31b1406c73fdc3. Jul 15 05:16:58.727797 systemd[1]: Started cri-containerd-56d23e4809a488ae070fab67d84edc39b1b02ddab019cf619ea2f571049f6c03.scope - libcontainer container 56d23e4809a488ae070fab67d84edc39b1b02ddab019cf619ea2f571049f6c03. Jul 15 05:16:58.781526 systemd-networkd[1721]: calia47854e085f: Gained IPv6LL Jul 15 05:16:58.825358 containerd[2005]: time="2025-07-15T05:16:58.824818124Z" level=info msg="StartContainer for \"ef68d2787b8ad53da0bf874ca4f1fdf9d1878a409d72d60528f041644e4e40df\" returns successfully" Jul 15 05:16:58.844971 systemd-networkd[1721]: calic1b9ef12ecb: Gained IPv6LL Jul 15 05:16:58.849760 containerd[2005]: time="2025-07-15T05:16:58.849715444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vdxxc,Uid:3c8ea6c2-6b54-497c-b3b0-486729d54117,Namespace:calico-system,Attempt:0,} returns sandbox id \"40e31e14daba9bde2544d90724ee9367dcd31cfdbf4203f0c20a3b2d2b18094b\"" Jul 15 05:16:58.873907 kubelet[3318]: I0715 05:16:58.873547 3318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-xbgj9" podStartSLOduration=44.873519967 podStartE2EDuration="44.873519967s" podCreationTimestamp="2025-07-15 05:16:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:16:58.871409871 +0000 UTC m=+50.433605704" watchObservedRunningTime="2025-07-15 05:16:58.873519967 +0000 UTC m=+50.435715773" Jul 15 05:16:59.032784 containerd[2005]: time="2025-07-15T05:16:59.032718767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8ff57bfb-4dgwh,Uid:e71f6908-5e30-4a56-99d8-319af24d49f8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"060f92ae965d5018f5b5b15ad16eb3872429a794faad1a8f6b31b1406c73fdc3\"" Jul 15 05:16:59.142034 containerd[2005]: time="2025-07-15T05:16:59.141674767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8ff57bfb-v9955,Uid:fa1bb245-af3c-4846-a1c8-e5bcfe83ea18,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"56d23e4809a488ae070fab67d84edc39b1b02ddab019cf619ea2f571049f6c03\"" Jul 15 05:16:59.165011 systemd-networkd[1721]: calib07fa07687d: Gained IPv6LL Jul 15 05:16:59.677984 systemd-networkd[1721]: calif6621c77e73: Gained IPv6LL Jul 15 05:17:00.031744 containerd[2005]: time="2025-07-15T05:17:00.031667044Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:00.035649 containerd[2005]: time="2025-07-15T05:17:00.035593039Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 15 05:17:00.040085 containerd[2005]: time="2025-07-15T05:17:00.039970340Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:00.044964 containerd[2005]: time="2025-07-15T05:17:00.044893351Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:00.047170 containerd[2005]: time="2025-07-15T05:17:00.047126339Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.796530699s" Jul 15 05:17:00.047366 containerd[2005]: time="2025-07-15T05:17:00.047345212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 15 05:17:00.049135 containerd[2005]: time="2025-07-15T05:17:00.049096491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 15 05:17:00.051895 containerd[2005]: time="2025-07-15T05:17:00.051770191Z" level=info msg="CreateContainer within sandbox \"4b39e07bad710ca5ff30e781e670c1bfc19fbe320993d6c051346e731d981910\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 15 05:17:00.072159 containerd[2005]: time="2025-07-15T05:17:00.072095291Z" level=info msg="Container 3444c55b8317250cb0fb50d27dfb3d024817c890b1d7374b78245464a6faaa19: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:17:00.089467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3997036249.mount: Deactivated successfully. Jul 15 05:17:00.106745 containerd[2005]: time="2025-07-15T05:17:00.106683176Z" level=info msg="CreateContainer within sandbox \"4b39e07bad710ca5ff30e781e670c1bfc19fbe320993d6c051346e731d981910\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"3444c55b8317250cb0fb50d27dfb3d024817c890b1d7374b78245464a6faaa19\"" Jul 15 05:17:00.108941 containerd[2005]: time="2025-07-15T05:17:00.108861452Z" level=info msg="StartContainer for \"3444c55b8317250cb0fb50d27dfb3d024817c890b1d7374b78245464a6faaa19\"" Jul 15 05:17:00.113039 containerd[2005]: time="2025-07-15T05:17:00.112982655Z" level=info msg="connecting to shim 3444c55b8317250cb0fb50d27dfb3d024817c890b1d7374b78245464a6faaa19" address="unix:///run/containerd/s/59059f37001970755e12f17dd8edc9854f199211459a0e2e0145d0d0d7dbe797" protocol=ttrpc version=3 Jul 15 05:17:00.127639 systemd-networkd[1721]: cali8a482a95b85: Gained IPv6LL Jul 15 05:17:00.244410 systemd[1]: Started cri-containerd-3444c55b8317250cb0fb50d27dfb3d024817c890b1d7374b78245464a6faaa19.scope - libcontainer container 3444c55b8317250cb0fb50d27dfb3d024817c890b1d7374b78245464a6faaa19. 
Jul 15 05:17:00.251643 systemd[1]: Started sshd@9-172.31.23.74:22-139.178.89.65:46176.service - OpenSSH per-connection server daemon (139.178.89.65:46176). Jul 15 05:17:00.461902 containerd[2005]: time="2025-07-15T05:17:00.461650355Z" level=info msg="StartContainer for \"3444c55b8317250cb0fb50d27dfb3d024817c890b1d7374b78245464a6faaa19\" returns successfully" Jul 15 05:17:00.549749 sshd[5557]: Accepted publickey for core from 139.178.89.65 port 46176 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:17:00.555930 sshd-session[5557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:17:00.573992 systemd-logind[1981]: New session 10 of user core. Jul 15 05:17:00.577083 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 15 05:17:00.993519 kubelet[3318]: I0715 05:17:00.992935 3318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-76jhj" podStartSLOduration=46.992904981 podStartE2EDuration="46.992904981s" podCreationTimestamp="2025-07-15 05:16:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:16:58.90859143 +0000 UTC m=+50.470787235" watchObservedRunningTime="2025-07-15 05:17:00.992904981 +0000 UTC m=+52.555100786" Jul 15 05:17:01.686317 sshd[5576]: Connection closed by 139.178.89.65 port 46176 Jul 15 05:17:01.686990 sshd-session[5557]: pam_unix(sshd:session): session closed for user core Jul 15 05:17:01.705110 systemd[1]: sshd@9-172.31.23.74:22-139.178.89.65:46176.service: Deactivated successfully. Jul 15 05:17:01.708579 systemd[1]: session-10.scope: Deactivated successfully. Jul 15 05:17:01.710285 systemd-logind[1981]: Session 10 logged out. Waiting for processes to exit. Jul 15 05:17:01.724324 systemd-logind[1981]: Removed session 10. 
Jul 15 05:17:02.830773 ntpd[1973]: Listen normally on 7 vxlan.calico 192.168.114.192:123 Jul 15 05:17:02.830873 ntpd[1973]: Listen normally on 8 vxlan.calico [fe80::64f1:d6ff:fe5e:33de%4]:123 Jul 15 05:17:02.834179 ntpd[1973]: Listen normally on 9 cali51b8ddb0ac8 [fe80::ecee:eeff:feee:eeee%7]:123 Jul 15 05:17:02.834233 ntpd[1973]: Listen normally on 10 calic1b9ef12ecb [fe80::ecee:eeff:feee:eeee%8]:123 Jul 15 05:17:02.834270 ntpd[1973]: Listen normally on 11 cali53418b569fa [fe80::ecee:eeff:feee:eeee%9]:123 Jul 15 05:17:02.834305 ntpd[1973]: Listen normally on 12 cali7fa58f77068 [fe80::ecee:eeff:feee:eeee%10]:123 Jul 15 05:17:02.834342 ntpd[1973]: Listen normally on 13 calia47854e085f [fe80::ecee:eeff:feee:eeee%11]:123 Jul 15 05:17:02.834379 ntpd[1973]: Listen normally on 14 calib07fa07687d [fe80::ecee:eeff:feee:eeee%12]:123 Jul 15 05:17:02.834415 ntpd[1973]: Listen normally on 15 calif6621c77e73 [fe80::ecee:eeff:feee:eeee%13]:123 Jul 15 05:17:02.834454 ntpd[1973]: Listen normally on 16 cali8a482a95b85 [fe80::ecee:eeff:feee:eeee%14]:123 Jul 15 05:17:03.650587 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2587132655.mount: Deactivated successfully.
Jul 15 05:17:05.396830 containerd[2005]: time="2025-07-15T05:17:05.396775331Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:05.408237 containerd[2005]: time="2025-07-15T05:17:05.407863273Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 15 05:17:05.478234 containerd[2005]: time="2025-07-15T05:17:05.478176642Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:05.494102 containerd[2005]: time="2025-07-15T05:17:05.493787358Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:05.496783 containerd[2005]: time="2025-07-15T05:17:05.496740430Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 5.445032757s" Jul 15 05:17:05.496783 containerd[2005]: time="2025-07-15T05:17:05.496783578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 15 05:17:05.510785 containerd[2005]: time="2025-07-15T05:17:05.510568770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 15 05:17:05.552910 containerd[2005]: time="2025-07-15T05:17:05.551288628Z" level=info msg="CreateContainer within sandbox \"5bd3ebd1a110e1845761cfa6901d48c5bb629f4dccdf29ccc1c2ab9f662f8973\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 15 05:17:05.664799 containerd[2005]: time="2025-07-15T05:17:05.664700927Z" level=info msg="Container fb07b74581f36a0b45e112cb47a8738b07494ccf8833a0cedfb420841b991134: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:17:05.737688 containerd[2005]: time="2025-07-15T05:17:05.737642292Z" level=info msg="CreateContainer within sandbox \"5bd3ebd1a110e1845761cfa6901d48c5bb629f4dccdf29ccc1c2ab9f662f8973\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"fb07b74581f36a0b45e112cb47a8738b07494ccf8833a0cedfb420841b991134\"" Jul 15 05:17:05.738687 containerd[2005]: time="2025-07-15T05:17:05.738661283Z" level=info msg="StartContainer for \"fb07b74581f36a0b45e112cb47a8738b07494ccf8833a0cedfb420841b991134\"" Jul 15 05:17:05.765498 containerd[2005]: time="2025-07-15T05:17:05.765449774Z" level=info msg="connecting to shim fb07b74581f36a0b45e112cb47a8738b07494ccf8833a0cedfb420841b991134" address="unix:///run/containerd/s/89674657d4be1062f642534d8d79cd55a489d9f7fe9035f8052433cfe7ef7b0f" protocol=ttrpc version=3 Jul 15 05:17:05.916338 systemd[1]: Started cri-containerd-fb07b74581f36a0b45e112cb47a8738b07494ccf8833a0cedfb420841b991134.scope - libcontainer container fb07b74581f36a0b45e112cb47a8738b07494ccf8833a0cedfb420841b991134. 
Jul 15 05:17:06.055471 containerd[2005]: time="2025-07-15T05:17:06.055268345Z" level=info msg="StartContainer for \"fb07b74581f36a0b45e112cb47a8738b07494ccf8833a0cedfb420841b991134\" returns successfully" Jul 15 05:17:06.731206 systemd[1]: Started sshd@10-172.31.23.74:22-139.178.89.65:46182.service - OpenSSH per-connection server daemon (139.178.89.65:46182). Jul 15 05:17:07.133950 sshd[5677]: Accepted publickey for core from 139.178.89.65 port 46182 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:17:07.135196 sshd-session[5677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:17:07.144133 systemd-logind[1981]: New session 11 of user core. Jul 15 05:17:07.151708 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 15 05:17:07.509582 containerd[2005]: time="2025-07-15T05:17:07.509522904Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fb07b74581f36a0b45e112cb47a8738b07494ccf8833a0cedfb420841b991134\" id:\"d57d7754d2734b0586213dfa784b651846c2ac92b596dbf57b7cc6e641f76454\" pid:5692 exit_status:1 exited_at:{seconds:1752556627 nanos:476118935}" Jul 15 05:17:07.967494 sshd[5699]: Connection closed by 139.178.89.65 port 46182 Jul 15 05:17:07.976840 systemd[1]: sshd@10-172.31.23.74:22-139.178.89.65:46182.service: Deactivated successfully. Jul 15 05:17:07.968937 sshd-session[5677]: pam_unix(sshd:session): session closed for user core Jul 15 05:17:07.980936 systemd[1]: session-11.scope: Deactivated successfully. Jul 15 05:17:07.984690 systemd-logind[1981]: Session 11 logged out. Waiting for processes to exit. Jul 15 05:17:07.991065 systemd-logind[1981]: Removed session 11. Jul 15 05:17:08.148953 containerd[2005]: time="2025-07-15T05:17:08.148853559Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fb07b74581f36a0b45e112cb47a8738b07494ccf8833a0cedfb420841b991134\" id:\"11673f6824091f6e8714befe9c9a3ed6b022228042ac6e29ea9f06c5da7914a5\" pid:5729 exit_status:1 exited_at:{seconds:1752556628 nanos:148572192}" Jul 15 05:17:09.645963 containerd[2005]: time="2025-07-15T05:17:09.645842814Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fb07b74581f36a0b45e112cb47a8738b07494ccf8833a0cedfb420841b991134\" id:\"ffff53f80290bca4cdc4f89e487b65661b2021812bae252da099bc123ff18990\" pid:5758 exit_status:1 exited_at:{seconds:1752556629 nanos:632056301}" Jul 15 05:17:11.096825 containerd[2005]: time="2025-07-15T05:17:11.096772795Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:11.099192 containerd[2005]: time="2025-07-15T05:17:11.099126649Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 15 05:17:11.101114 containerd[2005]: time="2025-07-15T05:17:11.100551730Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:11.104930 containerd[2005]: time="2025-07-15T05:17:11.104892945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:11.106332 containerd[2005]: time="2025-07-15T05:17:11.106251584Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image 
id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 5.595637415s" Jul 15 05:17:11.106332 containerd[2005]: time="2025-07-15T05:17:11.106298443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 15 05:17:11.108428 containerd[2005]: time="2025-07-15T05:17:11.108396110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 15 05:17:11.149266 containerd[2005]: time="2025-07-15T05:17:11.149219608Z" level=info msg="CreateContainer within sandbox \"685f1bfc91f8d7c6874940d3dba3738575dba1f2370f1bd48e2c6ed5318f56f6\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 15 05:17:11.161945 containerd[2005]: time="2025-07-15T05:17:11.161416270Z" level=info msg="Container 657fa8f730947a2bb039a0fb4cca8395ceff71f0227bef5a33306c0c7eec652d: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:17:11.190638 containerd[2005]: time="2025-07-15T05:17:11.190207255Z" level=info msg="CreateContainer within sandbox \"685f1bfc91f8d7c6874940d3dba3738575dba1f2370f1bd48e2c6ed5318f56f6\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"657fa8f730947a2bb039a0fb4cca8395ceff71f0227bef5a33306c0c7eec652d\"" Jul 15 05:17:11.194954 containerd[2005]: time="2025-07-15T05:17:11.191139452Z" level=info msg="StartContainer for \"657fa8f730947a2bb039a0fb4cca8395ceff71f0227bef5a33306c0c7eec652d\"" Jul 15 05:17:11.194954 containerd[2005]: time="2025-07-15T05:17:11.194220815Z" level=info msg="connecting to shim 657fa8f730947a2bb039a0fb4cca8395ceff71f0227bef5a33306c0c7eec652d" address="unix:///run/containerd/s/b45290e1a8357461cf7aa6bdce73a1b1ba62655492a8c2b70d3d34ad0697e276" protocol=ttrpc version=3 Jul 15 05:17:11.240221 systemd[1]: Started cri-containerd-657fa8f730947a2bb039a0fb4cca8395ceff71f0227bef5a33306c0c7eec652d.scope - libcontainer container 657fa8f730947a2bb039a0fb4cca8395ceff71f0227bef5a33306c0c7eec652d. 
Jul 15 05:17:11.378976 containerd[2005]: time="2025-07-15T05:17:11.376811198Z" level=info msg="StartContainer for \"657fa8f730947a2bb039a0fb4cca8395ceff71f0227bef5a33306c0c7eec652d\" returns successfully" Jul 15 05:17:11.719799 containerd[2005]: time="2025-07-15T05:17:11.719617647Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fb07b74581f36a0b45e112cb47a8738b07494ccf8833a0cedfb420841b991134\" id:\"3033d427f824150fdd8c202dc4897bf78474003aaf5fc044652600a70e420782\" pid:5830 exit_status:1 exited_at:{seconds:1752556631 nanos:718865287}" Jul 15 05:17:12.053146 kubelet[3318]: I0715 05:17:12.053052 3318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-w62jd" podStartSLOduration=35.004890772 podStartE2EDuration="42.05104861s" podCreationTimestamp="2025-07-15 05:16:30 +0000 UTC" firstStartedPulling="2025-07-15 05:16:58.464164483 +0000 UTC m=+50.026360277" lastFinishedPulling="2025-07-15 05:17:05.510322333 +0000 UTC m=+57.072518115" observedRunningTime="2025-07-15 05:17:07.109803985 +0000 UTC m=+58.671999814" watchObservedRunningTime="2025-07-15 05:17:12.05104861 +0000 UTC m=+63.613244415" Jul 15 05:17:12.111721 containerd[2005]: time="2025-07-15T05:17:12.111672092Z" level=info msg="TaskExit event in podsandbox handler container_id:\"657fa8f730947a2bb039a0fb4cca8395ceff71f0227bef5a33306c0c7eec652d\" id:\"f52b13dc9fe81ffb671d50fef0a0d60ba33280333a458e162b8dc9e3e7780efe\" pid:5853 exited_at:{seconds:1752556632 nanos:109374770}" Jul 15 05:17:12.138612 kubelet[3318]: I0715 05:17:12.138478 3318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6656f64997-pdnp6" podStartSLOduration=28.695158317 podStartE2EDuration="41.138456256s" podCreationTimestamp="2025-07-15 05:16:31 +0000 UTC" firstStartedPulling="2025-07-15 05:16:58.664717402 +0000 UTC m=+50.226913196" lastFinishedPulling="2025-07-15 05:17:11.108015332 +0000 UTC m=+62.670211135" observedRunningTime="2025-07-15 05:17:12.054474533 +0000 UTC m=+63.616670332" watchObservedRunningTime="2025-07-15 05:17:12.138456256 +0000 UTC m=+63.700652058" Jul 15 05:17:12.528475 containerd[2005]: time="2025-07-15T05:17:12.528424749Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:12.531658 containerd[2005]: time="2025-07-15T05:17:12.531614300Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 15 05:17:12.532630 containerd[2005]: time="2025-07-15T05:17:12.532594956Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:12.540270 containerd[2005]: time="2025-07-15T05:17:12.540220663Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:12.541911 containerd[2005]: time="2025-07-15T05:17:12.541855339Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.433274613s" Jul 15 
05:17:12.542034 containerd[2005]: time="2025-07-15T05:17:12.541915464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 15 05:17:12.544354 containerd[2005]: time="2025-07-15T05:17:12.544323278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 05:17:12.580035 containerd[2005]: time="2025-07-15T05:17:12.548676743Z" level=info msg="CreateContainer within sandbox \"40e31e14daba9bde2544d90724ee9367dcd31cfdbf4203f0c20a3b2d2b18094b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 15 05:17:12.611201 containerd[2005]: time="2025-07-15T05:17:12.611163451Z" level=info msg="Container bacf566d759ff2777aca816f5c67140b7342a4770edd91e50356a2513ec11a76: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:17:12.668312 containerd[2005]: time="2025-07-15T05:17:12.668269076Z" level=info msg="CreateContainer within sandbox \"40e31e14daba9bde2544d90724ee9367dcd31cfdbf4203f0c20a3b2d2b18094b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"bacf566d759ff2777aca816f5c67140b7342a4770edd91e50356a2513ec11a76\"" Jul 15 05:17:12.671173 containerd[2005]: time="2025-07-15T05:17:12.671136678Z" level=info msg="StartContainer for \"bacf566d759ff2777aca816f5c67140b7342a4770edd91e50356a2513ec11a76\"" Jul 15 05:17:12.673707 containerd[2005]: time="2025-07-15T05:17:12.673649757Z" level=info msg="connecting to shim bacf566d759ff2777aca816f5c67140b7342a4770edd91e50356a2513ec11a76" address="unix:///run/containerd/s/17c4549eccf3110954181371f57f3ecb4ccb6a45a7830e32ca48c4201f6674c5" protocol=ttrpc version=3 Jul 15 05:17:12.733311 systemd[1]: Started cri-containerd-bacf566d759ff2777aca816f5c67140b7342a4770edd91e50356a2513ec11a76.scope - libcontainer container bacf566d759ff2777aca816f5c67140b7342a4770edd91e50356a2513ec11a76. Jul 15 05:17:12.822326 containerd[2005]: time="2025-07-15T05:17:12.822197187Z" level=info msg="StartContainer for \"bacf566d759ff2777aca816f5c67140b7342a4770edd91e50356a2513ec11a76\" returns successfully" Jul 15 05:17:13.010615 systemd[1]: Started sshd@11-172.31.23.74:22-139.178.89.65:40774.service - OpenSSH per-connection server daemon (139.178.89.65:40774). Jul 15 05:17:13.262264 sshd[5896]: Accepted publickey for core from 139.178.89.65 port 40774 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:17:13.265868 sshd-session[5896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:17:13.271466 systemd-logind[1981]: New session 12 of user core. Jul 15 05:17:13.278315 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 15 05:17:14.557502 sshd[5899]: Connection closed by 139.178.89.65 port 40774 Jul 15 05:17:14.560263 sshd-session[5896]: pam_unix(sshd:session): session closed for user core Jul 15 05:17:14.571790 systemd[1]: sshd@11-172.31.23.74:22-139.178.89.65:40774.service: Deactivated successfully. Jul 15 05:17:14.576006 systemd[1]: session-12.scope: Deactivated successfully. Jul 15 05:17:14.580869 systemd-logind[1981]: Session 12 logged out. Waiting for processes to exit. Jul 15 05:17:14.607259 systemd[1]: Started sshd@12-172.31.23.74:22-139.178.89.65:40778.service - OpenSSH per-connection server daemon (139.178.89.65:40778). Jul 15 05:17:14.614845 systemd-logind[1981]: Removed session 12. 
Jul 15 05:17:14.810695 sshd[5916]: Accepted publickey for core from 139.178.89.65 port 40778 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:17:14.814458 sshd-session[5916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:17:14.829456 systemd-logind[1981]: New session 13 of user core. Jul 15 05:17:14.836120 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 15 05:17:15.447076 sshd[5919]: Connection closed by 139.178.89.65 port 40778 Jul 15 05:17:15.445666 sshd-session[5916]: pam_unix(sshd:session): session closed for user core Jul 15 05:17:15.462286 systemd[1]: sshd@12-172.31.23.74:22-139.178.89.65:40778.service: Deactivated successfully. Jul 15 05:17:15.465379 systemd[1]: session-13.scope: Deactivated successfully. Jul 15 05:17:15.470946 systemd-logind[1981]: Session 13 logged out. Waiting for processes to exit. Jul 15 05:17:15.500983 systemd[1]: Started sshd@13-172.31.23.74:22-139.178.89.65:40790.service - OpenSSH per-connection server daemon (139.178.89.65:40790). Jul 15 05:17:15.508948 systemd-logind[1981]: Removed session 13. Jul 15 05:17:15.746581 sshd[5933]: Accepted publickey for core from 139.178.89.65 port 40790 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:17:15.749275 sshd-session[5933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:17:15.761055 systemd-logind[1981]: New session 14 of user core. Jul 15 05:17:15.767280 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 15 05:17:16.293551 sshd[5938]: Connection closed by 139.178.89.65 port 40790 Jul 15 05:17:16.294606 sshd-session[5933]: pam_unix(sshd:session): session closed for user core Jul 15 05:17:16.309271 systemd-logind[1981]: Session 14 logged out. Waiting for processes to exit. Jul 15 05:17:16.310196 systemd[1]: sshd@13-172.31.23.74:22-139.178.89.65:40790.service: Deactivated successfully. Jul 15 05:17:16.314581 systemd[1]: session-14.scope: Deactivated successfully. Jul 15 05:17:16.318555 systemd-logind[1981]: Removed session 14. 
Jul 15 05:17:17.090901 containerd[2005]: time="2025-07-15T05:17:17.089598565Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:17.092328 containerd[2005]: time="2025-07-15T05:17:17.091869315Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 15 05:17:17.104909 containerd[2005]: time="2025-07-15T05:17:17.103948217Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 4.559438282s" Jul 15 05:17:17.104909 containerd[2005]: time="2025-07-15T05:17:17.103995694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 15 05:17:17.115902 containerd[2005]: time="2025-07-15T05:17:17.113737655Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:17.115902 containerd[2005]: time="2025-07-15T05:17:17.114471365Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:17.153379 containerd[2005]: time="2025-07-15T05:17:17.153345121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 05:17:17.167017 containerd[2005]: time="2025-07-15T05:17:17.166977194Z" level=info msg="CreateContainer within sandbox \"060f92ae965d5018f5b5b15ad16eb3872429a794faad1a8f6b31b1406c73fdc3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 05:17:17.188926 containerd[2005]: time="2025-07-15T05:17:17.188411032Z" level=info msg="Container 4d58b79e9e8ab27c1d767e01253e70249f386cf10baff6a7fbaba8e3a67e140f: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:17:17.214678 containerd[2005]: time="2025-07-15T05:17:17.214622338Z" level=info msg="CreateContainer within sandbox \"060f92ae965d5018f5b5b15ad16eb3872429a794faad1a8f6b31b1406c73fdc3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4d58b79e9e8ab27c1d767e01253e70249f386cf10baff6a7fbaba8e3a67e140f\"" Jul 15 05:17:17.216441 containerd[2005]: time="2025-07-15T05:17:17.216397804Z" level=info msg="StartContainer for \"4d58b79e9e8ab27c1d767e01253e70249f386cf10baff6a7fbaba8e3a67e140f\"" Jul 15 05:17:17.221905 containerd[2005]: time="2025-07-15T05:17:17.220918061Z" level=info msg="connecting to shim 4d58b79e9e8ab27c1d767e01253e70249f386cf10baff6a7fbaba8e3a67e140f" address="unix:///run/containerd/s/8e5ef1cd0d1ce5aa76ceff5f350687cd36d3b70a89f81e4a226adab4eda158f7" protocol=ttrpc version=3 Jul 15 05:17:17.305060 systemd[1]: Started cri-containerd-4d58b79e9e8ab27c1d767e01253e70249f386cf10baff6a7fbaba8e3a67e140f.scope - libcontainer container 4d58b79e9e8ab27c1d767e01253e70249f386cf10baff6a7fbaba8e3a67e140f. 
Jul 15 05:17:17.417860 containerd[2005]: time="2025-07-15T05:17:17.417330133Z" level=info msg="StartContainer for \"4d58b79e9e8ab27c1d767e01253e70249f386cf10baff6a7fbaba8e3a67e140f\" returns successfully" Jul 15 05:17:17.592530 containerd[2005]: time="2025-07-15T05:17:17.591965457Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:17.595465 containerd[2005]: time="2025-07-15T05:17:17.595426374Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 15 05:17:17.598368 containerd[2005]: time="2025-07-15T05:17:17.598320782Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 444.767396ms" Jul 15 05:17:17.598508 containerd[2005]: time="2025-07-15T05:17:17.598492804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 15 05:17:17.599573 containerd[2005]: time="2025-07-15T05:17:17.599544627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 15 05:17:17.602765 containerd[2005]: time="2025-07-15T05:17:17.602730009Z" level=info msg="CreateContainer within sandbox \"56d23e4809a488ae070fab67d84edc39b1b02ddab019cf619ea2f571049f6c03\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 05:17:17.627072 containerd[2005]: time="2025-07-15T05:17:17.627017212Z" level=info msg="Container b389dbad12caf73f2a322b2d27f5a29903d02a799a4c38166610ce44839ca9f7: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:17:17.647358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount696415862.mount: Deactivated successfully. Jul 15 05:17:17.652906 containerd[2005]: time="2025-07-15T05:17:17.652480355Z" level=info msg="CreateContainer within sandbox \"56d23e4809a488ae070fab67d84edc39b1b02ddab019cf619ea2f571049f6c03\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b389dbad12caf73f2a322b2d27f5a29903d02a799a4c38166610ce44839ca9f7\"" Jul 15 05:17:17.654435 containerd[2005]: time="2025-07-15T05:17:17.654332622Z" level=info msg="StartContainer for \"b389dbad12caf73f2a322b2d27f5a29903d02a799a4c38166610ce44839ca9f7\"" Jul 15 05:17:17.656000 containerd[2005]: time="2025-07-15T05:17:17.655607434Z" level=info msg="connecting to shim b389dbad12caf73f2a322b2d27f5a29903d02a799a4c38166610ce44839ca9f7" address="unix:///run/containerd/s/104dca45853d5f8e97e88ce6d1024ec7709c276bc9f46ddae254bd935a9b67a4" protocol=ttrpc version=3 Jul 15 05:17:17.717098 systemd[1]: Started cri-containerd-b389dbad12caf73f2a322b2d27f5a29903d02a799a4c38166610ce44839ca9f7.scope - libcontainer container b389dbad12caf73f2a322b2d27f5a29903d02a799a4c38166610ce44839ca9f7. 
Jul 15 05:17:18.194930 containerd[2005]: time="2025-07-15T05:17:18.194735162Z" level=info msg="StartContainer for \"b389dbad12caf73f2a322b2d27f5a29903d02a799a4c38166610ce44839ca9f7\" returns successfully" Jul 15 05:17:18.388272 kubelet[3318]: I0715 05:17:18.387737 3318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d8ff57bfb-v9955" podStartSLOduration=32.924864869 podStartE2EDuration="51.379482885s" podCreationTimestamp="2025-07-15 05:16:27 +0000 UTC" firstStartedPulling="2025-07-15 05:16:59.144688261 +0000 UTC m=+50.706884047" lastFinishedPulling="2025-07-15 05:17:17.599306281 +0000 UTC m=+69.161502063" observedRunningTime="2025-07-15 05:17:18.364794064 +0000 UTC m=+69.926989888" watchObservedRunningTime="2025-07-15 05:17:18.379482885 +0000 UTC m=+69.941678686" Jul 15 05:17:18.411607 kubelet[3318]: I0715 05:17:18.411546 3318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d8ff57bfb-4dgwh" podStartSLOduration=33.295436751 podStartE2EDuration="51.411530577s" podCreationTimestamp="2025-07-15 05:16:27 +0000 UTC" firstStartedPulling="2025-07-15 05:16:59.03679723 +0000 UTC m=+50.598993021" lastFinishedPulling="2025-07-15 05:17:17.152891045 +0000 UTC m=+68.715086847" observedRunningTime="2025-07-15 05:17:18.411134081 +0000 UTC m=+69.973329894" watchObservedRunningTime="2025-07-15 05:17:18.411530577 +0000 UTC m=+69.973726409" Jul 15 05:17:19.362267 kubelet[3318]: I0715 05:17:19.362222 3318 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 05:17:20.772851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1679208616.mount: Deactivated successfully. Jul 15 05:17:20.800042 containerd[2005]: time="2025-07-15T05:17:20.799983041Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:20.802904 containerd[2005]: time="2025-07-15T05:17:20.802705580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 15 05:17:20.803724 containerd[2005]: time="2025-07-15T05:17:20.803608502Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:20.809745 containerd[2005]: time="2025-07-15T05:17:20.809696994Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:20.810073 containerd[2005]: time="2025-07-15T05:17:20.810035326Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 3.210457932s" Jul 15 05:17:20.810156 containerd[2005]: time="2025-07-15T05:17:20.810078675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 15 05:17:20.812843 containerd[2005]: time="2025-07-15T05:17:20.812708535Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 15 05:17:20.816186 containerd[2005]: time="2025-07-15T05:17:20.816148232Z" level=info msg="CreateContainer within sandbox \"4b39e07bad710ca5ff30e781e670c1bfc19fbe320993d6c051346e731d981910\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 15 05:17:20.861916 containerd[2005]: time="2025-07-15T05:17:20.860084363Z" level=info msg="Container e204a66cf65832b552e00b64899eabdc2fe5161bc47aac54effa7619fe55c1d8: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:17:20.873341 containerd[2005]: time="2025-07-15T05:17:20.873296774Z" level=info msg="CreateContainer within sandbox \"4b39e07bad710ca5ff30e781e670c1bfc19fbe320993d6c051346e731d981910\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"e204a66cf65832b552e00b64899eabdc2fe5161bc47aac54effa7619fe55c1d8\"" Jul 15 05:17:20.875791 containerd[2005]: time="2025-07-15T05:17:20.875661954Z" level=info msg="StartContainer for \"e204a66cf65832b552e00b64899eabdc2fe5161bc47aac54effa7619fe55c1d8\"" Jul 15 05:17:20.882155 containerd[2005]: time="2025-07-15T05:17:20.882105769Z" level=info msg="connecting to shim e204a66cf65832b552e00b64899eabdc2fe5161bc47aac54effa7619fe55c1d8" address="unix:///run/containerd/s/59059f37001970755e12f17dd8edc9854f199211459a0e2e0145d0d0d7dbe797" protocol=ttrpc version=3 Jul 15 05:17:20.937828 systemd[1]: Started cri-containerd-e204a66cf65832b552e00b64899eabdc2fe5161bc47aac54effa7619fe55c1d8.scope - libcontainer container e204a66cf65832b552e00b64899eabdc2fe5161bc47aac54effa7619fe55c1d8. Jul 15 05:17:21.339787 systemd[1]: Started sshd@14-172.31.23.74:22-139.178.89.65:36950.service - OpenSSH per-connection server daemon (139.178.89.65:36950). Jul 15 05:17:21.376619 containerd[2005]: time="2025-07-15T05:17:21.375473533Z" level=info msg="StartContainer for \"e204a66cf65832b552e00b64899eabdc2fe5161bc47aac54effa7619fe55c1d8\" returns successfully" Jul 15 05:17:21.651042 sshd[6072]: Accepted publickey for core from 139.178.89.65 port 36950 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:17:21.666107 sshd-session[6072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:17:21.680007 systemd-logind[1981]: New session 15 of user core. Jul 15 05:17:21.686082 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 15 05:17:23.042364 sshd[6082]: Connection closed by 139.178.89.65 port 36950 Jul 15 05:17:23.042897 sshd-session[6072]: pam_unix(sshd:session): session closed for user core Jul 15 05:17:23.062819 systemd[1]: sshd@14-172.31.23.74:22-139.178.89.65:36950.service: Deactivated successfully. Jul 15 05:17:23.066205 systemd[1]: session-15.scope: Deactivated successfully. Jul 15 05:17:23.070645 systemd-logind[1981]: Session 15 logged out. Waiting for processes to exit. Jul 15 05:17:23.086175 systemd[1]: Started sshd@15-172.31.23.74:22-139.178.89.65:36964.service - OpenSSH per-connection server daemon (139.178.89.65:36964). Jul 15 05:17:23.106822 systemd-logind[1981]: Removed session 15. Jul 15 05:17:23.416789 sshd[6100]: Accepted publickey for core from 139.178.89.65 port 36964 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:17:23.421628 sshd-session[6100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:17:23.440394 systemd-logind[1981]: New session 16 of user core. Jul 15 05:17:23.447419 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jul 15 05:17:23.553657 containerd[2005]: time="2025-07-15T05:17:23.553607534Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:23.557132 containerd[2005]: time="2025-07-15T05:17:23.557078512Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 15 05:17:23.558019 containerd[2005]: time="2025-07-15T05:17:23.557960317Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:23.563443 containerd[2005]: time="2025-07-15T05:17:23.563381987Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:23.564294 containerd[2005]: time="2025-07-15T05:17:23.564188575Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.751435796s" Jul 15 05:17:23.564561 containerd[2005]: time="2025-07-15T05:17:23.564233789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 15 05:17:23.583308 containerd[2005]: time="2025-07-15T05:17:23.583255170Z" level=info msg="CreateContainer within sandbox \"40e31e14daba9bde2544d90724ee9367dcd31cfdbf4203f0c20a3b2d2b18094b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 15 05:17:23.689866 containerd[2005]: time="2025-07-15T05:17:23.689757091Z" level=info msg="Container af547c765f3c9725dac6632a15ab5a2e825497a0e50b8dde6c8d22ea6e0e36b6: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:17:23.707473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount591325421.mount: Deactivated successfully. Jul 15 05:17:23.725042 containerd[2005]: time="2025-07-15T05:17:23.724993726Z" level=info msg="CreateContainer within sandbox \"40e31e14daba9bde2544d90724ee9367dcd31cfdbf4203f0c20a3b2d2b18094b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"af547c765f3c9725dac6632a15ab5a2e825497a0e50b8dde6c8d22ea6e0e36b6\"" Jul 15 05:17:23.725896 containerd[2005]: time="2025-07-15T05:17:23.725841487Z" level=info msg="StartContainer for \"af547c765f3c9725dac6632a15ab5a2e825497a0e50b8dde6c8d22ea6e0e36b6\"" Jul 15 05:17:23.727972 containerd[2005]: time="2025-07-15T05:17:23.727913685Z" level=info msg="connecting to shim af547c765f3c9725dac6632a15ab5a2e825497a0e50b8dde6c8d22ea6e0e36b6" address="unix:///run/containerd/s/17c4549eccf3110954181371f57f3ecb4ccb6a45a7830e32ca48c4201f6674c5" protocol=ttrpc version=3 Jul 15 05:17:23.865480 systemd[1]: Started cri-containerd-af547c765f3c9725dac6632a15ab5a2e825497a0e50b8dde6c8d22ea6e0e36b6.scope - libcontainer container af547c765f3c9725dac6632a15ab5a2e825497a0e50b8dde6c8d22ea6e0e36b6. 
Jul 15 05:17:23.990766 containerd[2005]: time="2025-07-15T05:17:23.990662375Z" level=info msg="StartContainer for \"af547c765f3c9725dac6632a15ab5a2e825497a0e50b8dde6c8d22ea6e0e36b6\" returns successfully" Jul 15 05:17:24.594245 kubelet[3318]: I0715 05:17:24.547837 3318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-75bcfd49d-52jpz" podStartSLOduration=8.96140063 podStartE2EDuration="31.534296786s" podCreationTimestamp="2025-07-15 05:16:53 +0000 UTC" firstStartedPulling="2025-07-15 05:16:58.239624069 +0000 UTC m=+49.801819858" lastFinishedPulling="2025-07-15 05:17:20.812520215 +0000 UTC m=+72.374716014" observedRunningTime="2025-07-15 05:17:21.548264369 +0000 UTC m=+73.110460174" watchObservedRunningTime="2025-07-15 05:17:24.534296786 +0000 UTC m=+76.096492588" Jul 15 05:17:25.058708 kubelet[3318]: I0715 05:17:25.047733 3318 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 15 05:17:25.064319 kubelet[3318]: I0715 05:17:25.064283 3318 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 15 05:17:25.235437 kubelet[3318]: I0715 05:17:25.235282 3318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-vdxxc" podStartSLOduration=29.528191085 podStartE2EDuration="54.235263276s" podCreationTimestamp="2025-07-15 05:16:31 +0000 UTC" firstStartedPulling="2025-07-15 05:16:58.858986949 +0000 UTC m=+50.421182732" lastFinishedPulling="2025-07-15 05:17:23.56605913 +0000 UTC m=+75.128254923" observedRunningTime="2025-07-15 05:17:24.594433401 +0000 UTC m=+76.156629197" watchObservedRunningTime="2025-07-15 05:17:25.235263276 +0000 UTC m=+76.797459077" Jul 15 05:17:27.185636 sshd[6103]: Connection closed by 139.178.89.65 port 36964 Jul 15 05:17:27.191313 sshd-session[6100]: pam_unix(sshd:session): session closed for user core Jul 15 05:17:27.205777 systemd[1]: sshd@15-172.31.23.74:22-139.178.89.65:36964.service: Deactivated successfully. Jul 15 05:17:27.210199 systemd[1]: session-16.scope: Deactivated successfully. Jul 15 05:17:27.246541 systemd-logind[1981]: Session 16 logged out. Waiting for processes to exit. Jul 15 05:17:27.250411 systemd[1]: Started sshd@16-172.31.23.74:22-139.178.89.65:36966.service - OpenSSH per-connection server daemon (139.178.89.65:36966). Jul 15 05:17:27.256021 systemd-logind[1981]: Removed session 16. Jul 15 05:17:27.520051 sshd[6172]: Accepted publickey for core from 139.178.89.65 port 36966 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:17:27.522346 sshd-session[6172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:17:27.530057 systemd-logind[1981]: New session 17 of user core. Jul 15 05:17:27.537257 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 15 05:17:28.739416 containerd[2005]: time="2025-07-15T05:17:28.739303678Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f9322dacc487f0571f6b88e7bb08111a5cbafdd2cc1b24a0272afbd120187ca7\" id:\"b3165ecb4adc4ed680b209abd1b78e4fc11618d8d38e5b0c4dcd1039beef2cc2\" pid:6156 exited_at:{seconds:1752556648 nanos:484809461}" Jul 15 05:17:32.040095 sshd[6177]: Connection closed by 139.178.89.65 port 36966 Jul 15 05:17:32.114471 systemd[1]: sshd@16-172.31.23.74:22-139.178.89.65:36966.service: Deactivated successfully. 
Jul 15 05:17:32.044022 sshd-session[6172]: pam_unix(sshd:session): session closed for user core Jul 15 05:17:32.122523 systemd[1]: session-17.scope: Deactivated successfully. Jul 15 05:17:32.122805 systemd[1]: session-17.scope: Consumed 898ms CPU time, 77.1M memory peak. Jul 15 05:17:32.128235 systemd-logind[1981]: Session 17 logged out. Waiting for processes to exit. Jul 15 05:17:32.139780 systemd[1]: Started sshd@17-172.31.23.74:22-139.178.89.65:43842.service - OpenSSH per-connection server daemon (139.178.89.65:43842). Jul 15 05:17:32.149610 systemd-logind[1981]: Removed session 17. Jul 15 05:17:32.475070 sshd[6199]: Accepted publickey for core from 139.178.89.65 port 43842 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:17:32.481328 sshd-session[6199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:17:32.505684 systemd-logind[1981]: New session 18 of user core. Jul 15 05:17:32.512604 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 15 05:17:33.169840 kubelet[3318]: I0715 05:17:33.169025 3318 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 05:17:35.015579 sshd[6205]: Connection closed by 139.178.89.65 port 43842 Jul 15 05:17:35.141231 systemd[1]: Started sshd@18-172.31.23.74:22-139.178.89.65:43858.service - OpenSSH per-connection server daemon (139.178.89.65:43858). Jul 15 05:17:35.156832 sshd-session[6199]: pam_unix(sshd:session): session closed for user core Jul 15 05:17:35.223804 systemd[1]: sshd@17-172.31.23.74:22-139.178.89.65:43842.service: Deactivated successfully. Jul 15 05:17:35.229585 systemd[1]: session-18.scope: Deactivated successfully. Jul 15 05:17:35.229924 systemd[1]: session-18.scope: Consumed 1.019s CPU time, 67.6M memory peak. Jul 15 05:17:35.231773 systemd-logind[1981]: Session 18 logged out. Waiting for processes to exit. Jul 15 05:17:35.236944 systemd-logind[1981]: Removed session 18. Jul 15 05:17:35.531004 sshd[6222]: Accepted publickey for core from 139.178.89.65 port 43858 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:17:35.532721 sshd-session[6222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:17:35.542948 systemd-logind[1981]: New session 19 of user core. Jul 15 05:17:35.550350 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 15 05:17:35.919360 sshd[6232]: Connection closed by 139.178.89.65 port 43858 Jul 15 05:17:35.922289 sshd-session[6222]: pam_unix(sshd:session): session closed for user core Jul 15 05:17:35.935587 systemd-logind[1981]: Session 19 logged out. Waiting for processes to exit. Jul 15 05:17:35.936321 systemd[1]: sshd@18-172.31.23.74:22-139.178.89.65:43858.service: Deactivated successfully. Jul 15 05:17:35.941990 systemd[1]: session-19.scope: Deactivated successfully. Jul 15 05:17:35.948530 systemd-logind[1981]: Removed session 19. Jul 15 05:17:40.955425 systemd[1]: Started sshd@19-172.31.23.74:22-139.178.89.65:44204.service - OpenSSH per-connection server daemon (139.178.89.65:44204). Jul 15 05:17:41.153899 sshd[6250]: Accepted publickey for core from 139.178.89.65 port 44204 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:17:41.155376 sshd-session[6250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:17:41.162280 systemd-logind[1981]: New session 20 of user core. Jul 15 05:17:41.168542 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jul 15 05:17:41.451957 sshd[6253]: Connection closed by 139.178.89.65 port 44204 Jul 15 05:17:41.454896 sshd-session[6250]: pam_unix(sshd:session): session closed for user core Jul 15 05:17:41.477555 systemd[1]: sshd@19-172.31.23.74:22-139.178.89.65:44204.service: Deactivated successfully. Jul 15 05:17:41.485012 systemd[1]: session-20.scope: Deactivated successfully. Jul 15 05:17:41.494957 systemd-logind[1981]: Session 20 logged out. Waiting for processes to exit. Jul 15 05:17:41.497983 systemd-logind[1981]: Removed session 20. Jul 15 05:17:42.365986 containerd[2005]: time="2025-07-15T05:17:42.325621315Z" level=info msg="TaskExit event in podsandbox handler container_id:\"657fa8f730947a2bb039a0fb4cca8395ceff71f0227bef5a33306c0c7eec652d\" id:\"eb781002e341e7c9c93432ff0d45145a5a6c55422aaa3256a44d3bec5d63261a\" pid:6292 exited_at:{seconds:1752556662 nanos:154609201}" Jul 15 05:17:42.781571 containerd[2005]: time="2025-07-15T05:17:42.781527008Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fb07b74581f36a0b45e112cb47a8738b07494ccf8833a0cedfb420841b991134\" id:\"0a0f71150ea48e765ea11276af5352d6b95f1f1dffd45481cb20ebaab7fca2db\" pid:6293 exited_at:{seconds:1752556662 nanos:781096472}" Jul 15 05:17:46.499164 systemd[1]: Started sshd@20-172.31.23.74:22-139.178.89.65:44216.service - OpenSSH per-connection server daemon (139.178.89.65:44216). Jul 15 05:17:46.841068 sshd[6317]: Accepted publickey for core from 139.178.89.65 port 44216 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:17:46.845393 sshd-session[6317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:17:46.855985 systemd-logind[1981]: New session 21 of user core. Jul 15 05:17:46.863682 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 15 05:17:47.845027 sshd[6320]: Connection closed by 139.178.89.65 port 44216 Jul 15 05:17:47.847100 sshd-session[6317]: pam_unix(sshd:session): session closed for user core Jul 15 05:17:47.855949 systemd-logind[1981]: Session 21 logged out. Waiting for processes to exit. Jul 15 05:17:47.857051 systemd[1]: sshd@20-172.31.23.74:22-139.178.89.65:44216.service: Deactivated successfully. Jul 15 05:17:47.860586 systemd[1]: session-21.scope: Deactivated successfully. Jul 15 05:17:47.869085 systemd-logind[1981]: Removed session 21. Jul 15 05:17:52.879797 systemd[1]: Started sshd@21-172.31.23.74:22-139.178.89.65:41798.service - OpenSSH per-connection server daemon (139.178.89.65:41798). Jul 15 05:17:53.107427 sshd[6334]: Accepted publickey for core from 139.178.89.65 port 41798 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:17:53.112201 sshd-session[6334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:17:53.119223 systemd-logind[1981]: New session 22 of user core. Jul 15 05:17:53.125088 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 15 05:17:53.657845 sshd[6337]: Connection closed by 139.178.89.65 port 41798 Jul 15 05:17:53.659079 sshd-session[6334]: pam_unix(sshd:session): session closed for user core Jul 15 05:17:53.667372 systemd[1]: sshd@21-172.31.23.74:22-139.178.89.65:41798.service: Deactivated successfully. Jul 15 05:17:53.673074 systemd[1]: session-22.scope: Deactivated successfully. Jul 15 05:17:53.675733 systemd-logind[1981]: Session 22 logged out. Waiting for processes to exit. Jul 15 05:17:53.680842 systemd-logind[1981]: Removed session 22. 
Jul 15 05:17:56.120369 containerd[2005]: time="2025-07-15T05:17:56.120314384Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f9322dacc487f0571f6b88e7bb08111a5cbafdd2cc1b24a0272afbd120187ca7\" id:\"fcc58494a231a7f443862d6ddca49bf67468e6681d1da99aac737aa28d52314d\" pid:6359 exited_at:{seconds:1752556676 nanos:119825072}" Jul 15 05:17:58.560788 containerd[2005]: time="2025-07-15T05:17:58.560716561Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fb07b74581f36a0b45e112cb47a8738b07494ccf8833a0cedfb420841b991134\" id:\"1f4828ff12de0bed9290e31855a9495dcdf5c96acb406105d5a6dd1aa522cb7c\" pid:6384 exited_at:{seconds:1752556678 nanos:558474124}" Jul 15 05:17:58.702826 systemd[1]: Started sshd@22-172.31.23.74:22-139.178.89.65:41814.service - OpenSSH per-connection server daemon (139.178.89.65:41814). Jul 15 05:17:59.048054 sshd[6397]: Accepted publickey for core from 139.178.89.65 port 41814 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:17:59.050959 sshd-session[6397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:17:59.059933 systemd-logind[1981]: New session 23 of user core. Jul 15 05:17:59.065105 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 15 05:18:00.068453 sshd[6400]: Connection closed by 139.178.89.65 port 41814 Jul 15 05:18:00.069523 sshd-session[6397]: pam_unix(sshd:session): session closed for user core Jul 15 05:18:00.076040 systemd-logind[1981]: Session 23 logged out. Waiting for processes to exit. Jul 15 05:18:00.078751 systemd[1]: sshd@22-172.31.23.74:22-139.178.89.65:41814.service: Deactivated successfully. Jul 15 05:18:00.084994 systemd[1]: session-23.scope: Deactivated successfully. Jul 15 05:18:00.091340 systemd-logind[1981]: Removed session 23. Jul 15 05:18:00.598925 containerd[2005]: time="2025-07-15T05:18:00.598847071Z" level=info msg="TaskExit event in podsandbox handler container_id:\"657fa8f730947a2bb039a0fb4cca8395ceff71f0227bef5a33306c0c7eec652d\" id:\"c600a818a5f25a572a4ae717bff2163c4fe28e2d6cec169820b397b8bf78731c\" pid:6423 exited_at:{seconds:1752556680 nanos:597000328}" Jul 15 05:18:05.103344 systemd[1]: Started sshd@23-172.31.23.74:22-139.178.89.65:60526.service - OpenSSH per-connection server daemon (139.178.89.65:60526). Jul 15 05:18:05.372416 sshd[6433]: Accepted publickey for core from 139.178.89.65 port 60526 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:18:05.377135 sshd-session[6433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:18:05.388936 systemd-logind[1981]: New session 24 of user core. Jul 15 05:18:05.392149 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 15 05:18:06.649382 sshd[6436]: Connection closed by 139.178.89.65 port 60526 Jul 15 05:18:06.650104 sshd-session[6433]: pam_unix(sshd:session): session closed for user core Jul 15 05:18:06.658330 systemd[1]: sshd@23-172.31.23.74:22-139.178.89.65:60526.service: Deactivated successfully. Jul 15 05:18:06.664025 systemd[1]: session-24.scope: Deactivated successfully. Jul 15 05:18:06.670371 systemd-logind[1981]: Session 24 logged out. Waiting for processes to exit. Jul 15 05:18:06.673225 systemd-logind[1981]: Removed session 24. 
Jul 15 05:18:11.637932 containerd[2005]: time="2025-07-15T05:18:11.612818403Z" level=info msg="TaskExit event in podsandbox handler container_id:\"657fa8f730947a2bb039a0fb4cca8395ceff71f0227bef5a33306c0c7eec652d\" id:\"651ae5d786d6af57614b78652568f31131bc8ac5b92c16f092412bc216ac0c38\" pid:6462 exited_at:{seconds:1752556691 nanos:580678792}"
Jul 15 05:18:11.830021 containerd[2005]: time="2025-07-15T05:18:11.829948469Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fb07b74581f36a0b45e112cb47a8738b07494ccf8833a0cedfb420841b991134\" id:\"236afd272c28842d96a0d78c7e6c6f9a3f536eef6b814de36270639422e952c0\" pid:6484 exited_at:{seconds:1752556691 nanos:829165505}"
Jul 15 05:18:20.677866 kubelet[3318]: E0715 05:18:20.667965 3318 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-23-74)"
Jul 15 05:18:20.861853 systemd[1]: cri-containerd-0b4d49b4bb71db6408424490c027205c37ab43b1540d4a53564be84303023443.scope: Deactivated successfully.
Jul 15 05:18:20.863545 systemd[1]: cri-containerd-0b4d49b4bb71db6408424490c027205c37ab43b1540d4a53564be84303023443.scope: Consumed 16.788s CPU time, 104.3M memory peak, 88.5M read from disk.
Jul 15 05:18:20.948447 containerd[2005]: time="2025-07-15T05:18:20.948322363Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0b4d49b4bb71db6408424490c027205c37ab43b1540d4a53564be84303023443\" id:\"0b4d49b4bb71db6408424490c027205c37ab43b1540d4a53564be84303023443\" pid:3836 exit_status:1 exited_at:{seconds:1752556700 nanos:914034911}"
Jul 15 05:18:20.950864 containerd[2005]: time="2025-07-15T05:18:20.950790635Z" level=info msg="received exit event container_id:\"0b4d49b4bb71db6408424490c027205c37ab43b1540d4a53564be84303023443\" id:\"0b4d49b4bb71db6408424490c027205c37ab43b1540d4a53564be84303023443\" pid:3836 exit_status:1 exited_at:{seconds:1752556700 nanos:914034911}"
Jul 15 05:18:21.069323 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b4d49b4bb71db6408424490c027205c37ab43b1540d4a53564be84303023443-rootfs.mount: Deactivated successfully.
Jul 15 05:18:21.565083 systemd[1]: cri-containerd-4a7e92e6401ef0c5e2b0c595360610c6600879c9e4e29de87f23b515117b882c.scope: Deactivated successfully.
Jul 15 05:18:21.566219 systemd[1]: cri-containerd-4a7e92e6401ef0c5e2b0c595360610c6600879c9e4e29de87f23b515117b882c.scope: Consumed 3.978s CPU time, 88.2M memory peak, 119.7M read from disk.
Jul 15 05:18:21.570373 containerd[2005]: time="2025-07-15T05:18:21.570308197Z" level=info msg="received exit event container_id:\"4a7e92e6401ef0c5e2b0c595360610c6600879c9e4e29de87f23b515117b882c\" id:\"4a7e92e6401ef0c5e2b0c595360610c6600879c9e4e29de87f23b515117b882c\" pid:3170 exit_status:1 exited_at:{seconds:1752556701 nanos:569675687}"
Jul 15 05:18:21.571804 containerd[2005]: time="2025-07-15T05:18:21.571478103Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4a7e92e6401ef0c5e2b0c595360610c6600879c9e4e29de87f23b515117b882c\" id:\"4a7e92e6401ef0c5e2b0c595360610c6600879c9e4e29de87f23b515117b882c\" pid:3170 exit_status:1 exited_at:{seconds:1752556701 nanos:569675687}"
Jul 15 05:18:21.606034 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a7e92e6401ef0c5e2b0c595360610c6600879c9e4e29de87f23b515117b882c-rootfs.mount: Deactivated successfully.
Jul 15 05:18:21.953601 kubelet[3318]: I0715 05:18:21.953359 3318 scope.go:117] "RemoveContainer" containerID="4a7e92e6401ef0c5e2b0c595360610c6600879c9e4e29de87f23b515117b882c"
Jul 15 05:18:21.954154 kubelet[3318]: I0715 05:18:21.953661 3318 scope.go:117] "RemoveContainer" containerID="0b4d49b4bb71db6408424490c027205c37ab43b1540d4a53564be84303023443"
Jul 15 05:18:22.003916 containerd[2005]: time="2025-07-15T05:18:22.002910429Z" level=info msg="CreateContainer within sandbox \"bab6212173080ada15f04f524ef07032fd1e6a814d2abdce64d4f298a0c63e67\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jul 15 05:18:22.004677 containerd[2005]: time="2025-07-15T05:18:22.004501071Z" level=info msg="CreateContainer within sandbox \"1c36f497d60968b9a562459d936a4846bab3d0bdbd168d0584c9598ebc4797c4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jul 15 05:18:22.182246 containerd[2005]: time="2025-07-15T05:18:22.182199270Z" level=info msg="Container 71e78bb3cb44b6713b28f60d2c6003b12c45f379b58728e987f3f6fceb0f8ce0: CDI devices from CRI Config.CDIDevices: []"
Jul 15 05:18:22.211546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3813354050.mount: Deactivated successfully.
Jul 15 05:18:22.218245 containerd[2005]: time="2025-07-15T05:18:22.217824257Z" level=info msg="Container 2e00b31f3262e99013bb1b366cb88062ca870019d0cd34753d54eca30622a579: CDI devices from CRI Config.CDIDevices: []"
Jul 15 05:18:22.237124 containerd[2005]: time="2025-07-15T05:18:22.237063577Z" level=info msg="CreateContainer within sandbox \"1c36f497d60968b9a562459d936a4846bab3d0bdbd168d0584c9598ebc4797c4\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"71e78bb3cb44b6713b28f60d2c6003b12c45f379b58728e987f3f6fceb0f8ce0\""
Jul 15 05:18:22.238650 containerd[2005]: time="2025-07-15T05:18:22.238606789Z" level=info msg="CreateContainer within sandbox \"bab6212173080ada15f04f524ef07032fd1e6a814d2abdce64d4f298a0c63e67\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2e00b31f3262e99013bb1b366cb88062ca870019d0cd34753d54eca30622a579\""
Jul 15 05:18:22.242035 containerd[2005]: time="2025-07-15T05:18:22.241995308Z" level=info msg="StartContainer for \"2e00b31f3262e99013bb1b366cb88062ca870019d0cd34753d54eca30622a579\""
Jul 15 05:18:22.242234 containerd[2005]: time="2025-07-15T05:18:22.242216759Z" level=info msg="StartContainer for \"71e78bb3cb44b6713b28f60d2c6003b12c45f379b58728e987f3f6fceb0f8ce0\""
Jul 15 05:18:22.246293 containerd[2005]: time="2025-07-15T05:18:22.246214086Z" level=info msg="connecting to shim 71e78bb3cb44b6713b28f60d2c6003b12c45f379b58728e987f3f6fceb0f8ce0" address="unix:///run/containerd/s/9bbfa931fe9f7b095f5ddb1d738b92786d6852f32ca4595906c6878ecac115c1" protocol=ttrpc version=3
Jul 15 05:18:22.247468 containerd[2005]: time="2025-07-15T05:18:22.246725808Z" level=info msg="connecting to shim 2e00b31f3262e99013bb1b366cb88062ca870019d0cd34753d54eca30622a579" address="unix:///run/containerd/s/97addc6be760bf91821424c1dbf001a383c7208a56aed04949c6b64d89794c56" protocol=ttrpc version=3
Jul 15 05:18:22.315269 systemd[1]: Started cri-containerd-2e00b31f3262e99013bb1b366cb88062ca870019d0cd34753d54eca30622a579.scope - libcontainer container 2e00b31f3262e99013bb1b366cb88062ca870019d0cd34753d54eca30622a579.
Jul 15 05:18:22.316511 systemd[1]: Started cri-containerd-71e78bb3cb44b6713b28f60d2c6003b12c45f379b58728e987f3f6fceb0f8ce0.scope - libcontainer container 71e78bb3cb44b6713b28f60d2c6003b12c45f379b58728e987f3f6fceb0f8ce0.
Jul 15 05:18:22.416704 containerd[2005]: time="2025-07-15T05:18:22.416669934Z" level=info msg="StartContainer for \"71e78bb3cb44b6713b28f60d2c6003b12c45f379b58728e987f3f6fceb0f8ce0\" returns successfully"
Jul 15 05:18:22.417157 containerd[2005]: time="2025-07-15T05:18:22.417130293Z" level=info msg="StartContainer for \"2e00b31f3262e99013bb1b366cb88062ca870019d0cd34753d54eca30622a579\" returns successfully"
Jul 15 05:18:25.273362 systemd[1]: cri-containerd-eae74eacb2c075b73801ea07bcfdb3f8a99e7f9351688512c731318c08ead16c.scope: Deactivated successfully.
Jul 15 05:18:25.274463 systemd[1]: cri-containerd-eae74eacb2c075b73801ea07bcfdb3f8a99e7f9351688512c731318c08ead16c.scope: Consumed 2.077s CPU time, 36.9M memory peak, 85M read from disk.
Jul 15 05:18:25.275962 containerd[2005]: time="2025-07-15T05:18:25.275891743Z" level=info msg="received exit event container_id:\"eae74eacb2c075b73801ea07bcfdb3f8a99e7f9351688512c731318c08ead16c\" id:\"eae74eacb2c075b73801ea07bcfdb3f8a99e7f9351688512c731318c08ead16c\" pid:3163 exit_status:1 exited_at:{seconds:1752556705 nanos:275541712}"
Jul 15 05:18:25.276637 containerd[2005]: time="2025-07-15T05:18:25.276202825Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eae74eacb2c075b73801ea07bcfdb3f8a99e7f9351688512c731318c08ead16c\" id:\"eae74eacb2c075b73801ea07bcfdb3f8a99e7f9351688512c731318c08ead16c\" pid:3163 exit_status:1 exited_at:{seconds:1752556705 nanos:275541712}"
Jul 15 05:18:25.321406 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eae74eacb2c075b73801ea07bcfdb3f8a99e7f9351688512c731318c08ead16c-rootfs.mount: Deactivated successfully.
Jul 15 05:18:25.820740 containerd[2005]: time="2025-07-15T05:18:25.820704632Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f9322dacc487f0571f6b88e7bb08111a5cbafdd2cc1b24a0272afbd120187ca7\" id:\"878ddcd1d45a5ee30ec9cdcb5b58d1b795e5d16c3344cd37f67e44a3e5272b00\" pid:6615 exited_at:{seconds:1752556705 nanos:820383729}"
Jul 15 05:18:25.951279 kubelet[3318]: I0715 05:18:25.951236 3318 scope.go:117] "RemoveContainer" containerID="eae74eacb2c075b73801ea07bcfdb3f8a99e7f9351688512c731318c08ead16c"
Jul 15 05:18:25.953994 containerd[2005]: time="2025-07-15T05:18:25.953951124Z" level=info msg="CreateContainer within sandbox \"03bfa9e06d2af7a09d9ed9b8114079a83f22c3bb0f62caa81f2a7f5d8d4c6951\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul 15 05:18:25.994804 containerd[2005]: time="2025-07-15T05:18:25.994758864Z" level=info msg="Container a7b0346fefba52768000ff0a89f362bdf3b1dc509541377b389df61e107f86b9: CDI devices from CRI Config.CDIDevices: []"
Jul 15 05:18:26.004493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2673490125.mount: Deactivated successfully.
Jul 15 05:18:26.013361 containerd[2005]: time="2025-07-15T05:18:26.013250484Z" level=info msg="CreateContainer within sandbox \"03bfa9e06d2af7a09d9ed9b8114079a83f22c3bb0f62caa81f2a7f5d8d4c6951\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"a7b0346fefba52768000ff0a89f362bdf3b1dc509541377b389df61e107f86b9\""
Jul 15 05:18:26.013905 containerd[2005]: time="2025-07-15T05:18:26.013711215Z" level=info msg="StartContainer for \"a7b0346fefba52768000ff0a89f362bdf3b1dc509541377b389df61e107f86b9\""
Jul 15 05:18:26.014796 containerd[2005]: time="2025-07-15T05:18:26.014739062Z" level=info msg="connecting to shim a7b0346fefba52768000ff0a89f362bdf3b1dc509541377b389df61e107f86b9" address="unix:///run/containerd/s/067d96221bdbdf565b09a2bb8490be3bc17c363675298ef3da2159fb4353c503" protocol=ttrpc version=3
Jul 15 05:18:26.086110 systemd[1]: Started cri-containerd-a7b0346fefba52768000ff0a89f362bdf3b1dc509541377b389df61e107f86b9.scope - libcontainer container a7b0346fefba52768000ff0a89f362bdf3b1dc509541377b389df61e107f86b9.
Jul 15 05:18:26.159691 containerd[2005]: time="2025-07-15T05:18:26.159653287Z" level=info msg="StartContainer for \"a7b0346fefba52768000ff0a89f362bdf3b1dc509541377b389df61e107f86b9\" returns successfully"
Jul 15 05:18:30.706129 kubelet[3318]: E0715 05:18:30.706046 3318 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-74?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 15 05:18:35.193856 systemd[1]: cri-containerd-71e78bb3cb44b6713b28f60d2c6003b12c45f379b58728e987f3f6fceb0f8ce0.scope: Deactivated successfully.
Jul 15 05:18:35.194416 systemd[1]: cri-containerd-71e78bb3cb44b6713b28f60d2c6003b12c45f379b58728e987f3f6fceb0f8ce0.scope: Consumed 276ms CPU time, 70.6M memory peak, 39.1M read from disk.
Jul 15 05:18:35.211924 containerd[2005]: time="2025-07-15T05:18:35.211869368Z" level=info msg="TaskExit event in podsandbox handler container_id:\"71e78bb3cb44b6713b28f60d2c6003b12c45f379b58728e987f3f6fceb0f8ce0\" id:\"71e78bb3cb44b6713b28f60d2c6003b12c45f379b58728e987f3f6fceb0f8ce0\" pid:6557 exit_status:1 exited_at:{seconds:1752556715 nanos:211549588}"
Jul 15 05:18:35.212381 containerd[2005]: time="2025-07-15T05:18:35.211871445Z" level=info msg="received exit event container_id:\"71e78bb3cb44b6713b28f60d2c6003b12c45f379b58728e987f3f6fceb0f8ce0\" id:\"71e78bb3cb44b6713b28f60d2c6003b12c45f379b58728e987f3f6fceb0f8ce0\" pid:6557 exit_status:1 exited_at:{seconds:1752556715 nanos:211549588}"
Jul 15 05:18:35.247669 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71e78bb3cb44b6713b28f60d2c6003b12c45f379b58728e987f3f6fceb0f8ce0-rootfs.mount: Deactivated successfully.
Jul 15 05:18:36.012479 kubelet[3318]: I0715 05:18:36.012431 3318 scope.go:117] "RemoveContainer" containerID="0b4d49b4bb71db6408424490c027205c37ab43b1540d4a53564be84303023443"
Jul 15 05:18:36.013143 kubelet[3318]: I0715 05:18:36.012662 3318 scope.go:117] "RemoveContainer" containerID="71e78bb3cb44b6713b28f60d2c6003b12c45f379b58728e987f3f6fceb0f8ce0"
Jul 15 05:18:36.015448 kubelet[3318]: E0715 05:18:36.015301 3318 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-5bf8dfcb4-5ghjj_tigera-operator(62223633-458f-42ba-92a3-2b64daa0e32f)\"" pod="tigera-operator/tigera-operator-5bf8dfcb4-5ghjj" podUID="62223633-458f-42ba-92a3-2b64daa0e32f"
Jul 15 05:18:36.088179 containerd[2005]: time="2025-07-15T05:18:36.088119228Z" level=info msg="RemoveContainer for \"0b4d49b4bb71db6408424490c027205c37ab43b1540d4a53564be84303023443\""
Jul 15 05:18:36.102526 containerd[2005]: time="2025-07-15T05:18:36.102460188Z" level=info msg="RemoveContainer for \"0b4d49b4bb71db6408424490c027205c37ab43b1540d4a53564be84303023443\" returns successfully"
Jul 15 05:18:40.721958 kubelet[3318]: E0715 05:18:40.721480 3318 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-23-74)"
Jul 15 05:18:41.536899 containerd[2005]: time="2025-07-15T05:18:41.536752970Z" level=info msg="TaskExit event in podsandbox handler container_id:\"657fa8f730947a2bb039a0fb4cca8395ceff71f0227bef5a33306c0c7eec652d\" id:\"d9af350eacde71f14397ad20a3571eaf052dcb8f5a542c19c1532022e1570a67\" pid:6708 exit_status:1 exited_at:{seconds:1752556721 nanos:536135677}"
Jul 15 05:18:41.651288 containerd[2005]: time="2025-07-15T05:18:41.651161674Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fb07b74581f36a0b45e112cb47a8738b07494ccf8833a0cedfb420841b991134\" id:\"098f2b080677317861d96a9335abdf5bd75b86252d22c5aac729c8ef62021f04\" pid:6729 exited_at:{seconds:1752556721 nanos:650232982}"